Welcome to part three of the lecture on attribute grammars. We will continue with attribute grammars, attributed translation grammars, and semantic analysis in this lecture as well. First, a brief recap of attribute grammars. Attribute grammars are extensions of context-free grammars: with each symbol of the grammar, whether terminal or non-terminal, a set of attributes is associated. Two types of attributes are possible, inherited and synthesized, and there are rules associated with each production to compute them: rules are provided to compute the synthesized attributes of the left-hand-side non-terminal and the inherited attributes of the right-hand-side non-terminals. The rules are, of course, strictly local to the production and have no side effects. Now, the classification into L-attributed and S-attributed grammars. S-attributed grammars (SAGs) are very simple: there are only synthesized attributes, and we can use any bottom-up evaluation strategy to evaluate all the attributes in a single pass. Because of this property they can be combined with LR parsing, and therefore YACC is very useful with such SAGs. In L-attributed grammars (LAGs) the dependencies go from left to right. Specifically, the attributes may be synthesized, and if they are inherited they have the following limitation: in a production A → X1 X2 … Xn, an inherited attribute Xi.a may depend only on the inherited attributes of the left-hand-side non-terminal A, or on the attributes of the symbols to the left of Xi, that is, Xk for k = 1 to i − 1. We will concentrate on SAGs and one-pass LAGs, for which we can do attribute evaluation along with LL or recursive descent parsing. Attribute evaluation for LAGs is very simple: we just do a depth-first visit of the parse tree.
Given a parse tree with unevaluated attributes, the output is a parse tree with consistent attribute values. The method is very simple: for each child m of a node n, from left to right, evaluate the inherited attributes of m (because we are visiting m next), then DFvisit(m), the recursive call; and after we visit all the children, evaluate the synthesized attributes of n. This is the algorithm for attribute evaluation of L-attributed grammars. Here is the example we saw last time; it is indeed an LAG, since there are no dependencies that go from right to left. For example, in D → T L we have L.type = T.type: T.type is synthesized to the left of L and is then inherited into the non-terminal L. Now let us see what makes a grammar non-L-attributed, a non-LAG as they say. It is actually a similar grammar, associating type information with names in variable declarations, but look at the syntax. Examples are a, b, c : int and x, y : float; in the previous grammar we had int a, b, c and float x, y, whereas here the type information comes after the variable list. This is a Pascal-style declaration, and the effect is similar: a, b and c are tagged with type integer, and x and y (there is no z here, a minor error on the slide) are tagged with type real. In this case, if you look at the production D → L : T, all the others are the same, but the type information T.type is synthesized, and the non-terminal L has an inherited attribute L.type. We would like L.type to become T.type; that means the synthesized attribute actually flows to the left and gets into L.type.
This is a right-to-left dependence, not a left-to-right one, and this is exactly what makes the attribute grammar non-L-attributed: we cannot evaluate it in a left-to-right pass, because when we reach L the type information is not yet ready, and by the time we have synthesized the type information we have already passed L. Since we are looking at one-pass evaluation, this attribute grammar is not an LAG. Here is another example of an L-attributed grammar: expression evaluation with a nice construct such as let ID = E in E. This attribute grammar is indeed L-attributed. If you look at the attribute computation rules: here the inherited attribute E.symtab is initialized, so there is no violation of any rule; S.val is synthesized and becomes E.val, so there is no violation regarding LAGs here either. In E → E + T, the inherited attribute of E1 goes into E2 and T, which is perfectly valid; the same is true of E → T, and T → T * F, T → F, F → (E) are all similar. Now let us look at the important production E → let ID = E in E. Here the inherited attribute, the symbol table of E1, is sent to E2; this is a perfectly valid computation, because E2.symtab depends on the parent's symtab, so nothing wrong with that. Then the symbol table of E3 is the symbol table of E1 overwritten with the association of ID to the value of E2. Since ID and E2 are to the left of E3, and E1.symtab is an inherited attribute, there is nothing wrong with this computation rule either. None of the other rules violate the properties of an LAG; therefore the grammar is indeed an LAG.
Now let us look at a modification of the let ID = E2 in E3 rule: modify it as return E3 with ID = E2. Let us say the semantics of this statement is the same as that of let ID = E2 in E3: evaluate E3 with the occurrences of ID replaced by the value of E2. The attribute grammar actually does not change at all, because it is very similar to the previous rule; we are going to use the association of ID to E2 inside E3. But unfortunately, when we do that, E3.symtab now depends on the attributes of symbols to its right: it depends on the name ID and on the value E2.val, which are to the right of E3. This is a clear violation of the LAG rules; therefore the grammar with the new production return E3 with ID = E2 is not an L-attributed grammar. Now, I promised to show the evaluation sequence using the DFvisit procedure. It is the same example, evaluating the expression let I = 4 in A + 3, and the visit sequence is written down here. We start here, then go to visit 2, where the symbol table is initialized; then we visit 3, then 4 and then 5, and then we visit 6. This is the DFvisit strategy. Here the inherited attribute, the symbol table, is evaluated and passed on to 7 and then 8 as well. At this point the number 4 is passed upwards as a synthesized attribute. Then come visits 10, 11 and 12, and once these are completed we go to number 13. Nodes 13 and 14 are two terminal symbols, so they really do not have any semantic action associated with them. We go to 15, and we are now ready to evaluate the symbol table here, using the inherited attributes of the parent and sibling.
That is done here, and the visit continues: the symbol table is passed on to 18, and here the value of A is looked up in the symbol table, which produces 4. This becomes the synthesized attribute, which is passed on to 20, 21 and 22. After 22 we visit plus, which is 23; then we visit 24 and 25, which produce the number 3, again passed upwards as a synthesized attribute. Here the value 4 from one subtree and the value 3 from the other are added to produce 7, which is passed up as a synthesized attribute during visit number 31, and finally the value is produced in S during visit number 32. This is the DFvisit sequence used to evaluate the attributes, and if you observe it, the values always flow from left to right, from top to bottom, or from bottom to top, but never from right to left. Now let us look at the classification of attribute grammars into pure attribute grammars and attributed translation grammars. Pure attribute grammars are what we have studied so far; we asserted that their attribute computation rules do not have any side effects. Now suppose we permit, among the attribute computation rules, some small program segments, like procedure or function calls, that perform output or other harmless operations, and we add these to the attribute grammar; the entire system is then called an attributed translation grammar. What are these program segments capable of doing without any serious side effects? For example, symbol table operations, such as insertion of something into the symbol table or deletion of something from it, or writing generated code to a file; these are very important operations that can be performed.
When we say side-effect-free computation, we mean that none of the contents of the nodes, the other attributes and so on, are modified by such operations; that is what we really want. Such side-effect-free computations, when added, produce an attributed translation grammar. As a result of these action code segments, the evaluation orders may become a bit constrained; not all orders remain possible, and I will show you an example of this very soon. Such constraints are added to the attribute dependence graph as implicit edges. These actions can be added to both S-attributed and L-attributed grammars, in which case they become SATGs and LATGs respectively. In our semantic analysis we are going to use LATGs of the one-pass variety and, of course, SATGs, which are always one-pass. Here is an SATG for a desk calculator. It appears very familiar, because this is nothing but a YACC specification for the desk calculator. Let us look at the most important productions: expression goes to expression plus expression, expression minus expression, expression star expression, expression slash expression, then the parenthesized expression, and number. Because we have only synthesized attributes, the computation rule can be placed at the end of the production; for LATGs this is not the way to do it, as we are going to see very soon. So $$ = $1 + $3 computes the value of the left-hand-side expression in terms of the values of the two right-hand-side expressions: $1 corresponds to the first expression and $3 corresponds to the second expression. The reason it is $3 rather than $2 is that the symbols on the right-hand side are numbered one, two, three: the first expression is number one, the plus sign is number two, and the second expression is number three.
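As a concrete sketch, the productions described above would appear in a YACC specification roughly as follows; the token and precedence declarations are assumptions for illustration, not the exact specification on the slide.

```yacc
%token NUMBER
%left '+' '-'
%left '*' '/'
%%
expr : expr '+' expr   { $$ = $1 + $3; }
     | expr '-' expr   { $$ = $1 - $3; }
     | expr '*' expr   { $$ = $1 * $3; }
     | expr '/' expr   { $$ = $2; /* sic: parenthesized case below */ }
     | '(' expr ')'    { $$ = $2; }
     | NUMBER          /* no action: YACC's default rule $$ = $1 */
     ;
```

Here $1, $2, $3 index the right-hand-side symbols in order, which is why the second expression of expr + expr is $3 and the inner expression of the parenthesized case is $2.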
Again, this is the YACC syntax for writing the attribute computations; minus, star and slash are very similar. For the parenthesized expression there is nothing to do, it is just a copy of the inner expression, and for number, when we do not write any rule, YACC assumes a copy operation, so whatever is the attribute of number is copied to expression; that is the default rule. Now, this particular specification had just one side-effect action, printing the value; that is all it did, it had no other action. Suppose we permit variable names to be introduced into the desk calculator. We now have the extra productions expression → name = expression and expression → name; the rest of the productions are the same. When we use a name, its value is obtained from the name table, and whenever we say name = expression, the value along with the name is introduced into the name table. Here is the action to do that, in the production expression → name = expression. symlook is a procedure which looks up the symbol table, and its parameter is the name, that is $1. If the name is present in the symbol table, it produces a pointer to that entry; if the name is not there, it introduces the name into the symbol table and then returns a pointer to the new entry in sp. Once that happens, we can insert the value into the symbol table entry pointed to by sp: sp->value = $3 takes the value of the expression and associates it with the name pointed to by sp, and the value returned by this entire production is the value of the third symbol, that is, the expression; $$ = $3 does that. And when we use a name, we look that name up.
In symlook($1), the name is looked up in the symbol table and a pointer to its entry is returned. If the name is not present in the symbol table, it is introduced, and the default value 0 is placed in the value field of the entry; $$ = sp->value then returns the value of that entry as the value of the expression on the left-hand side. These are actions which do not modify any other attribute of the grammar, but they are certainly not attribute computations; that is why this is a synthesized attributed translation grammar, an SATG. Now we continue with another example, which provides a changed grammar for the declarations that we saw before. Why should we change the grammar? We changed it because the previous grammar was not LL(1), and specifically for an LATG we require a grammar which can be parsed either by LL parsers or by recursive descent parsers. The change here is in removing the left recursion: declaration → dlist $, dlist → d d′, d′ → ε | ; dlist, and the rest of it is the same. For the list of identifiers we also remove the left recursion and make it a right recursion: L → ID L′ and L′ → ε | , L; the rest of the productions are the same. The semantic rules, in the case of an LAG, can always be written at the end of the production, simply because the order in which we write the semantic rules within a production is not relevant; the dependences indicate the order in which the computations must happen. But a pure attribute grammar of this kind has no actions embedded in it.
We will not be in a position to translate such a grammar into a program directly, whereas if we consider the LATG we can translate the entire grammar into a program using an automatic generator. So let us see how the LATG specification differs from the LAG and SAG specifications. The grammar here is the same as before. For the productions declaration → dlist $, dlist → d d′, and d′ → ε | ; dlist there are no actions. Now consider production number four, d → T L. The rule of computation could be attached at the end, but this is an LATG; therefore the initialization of the inherited attributes of L has to be done just before we actually parse the string generated by L. So L.type = T.type: T.type is a synthesized attribute of T, L.type is the inherited attribute of L, and we compute it just before L. This is the characteristic of an LATG: compute the inherited attributes of a non-terminal just before that non-terminal is processed. Then we have T → int, where the rule can be attached anywhere, it really does not matter, and T → float, where likewise no other order is possible: T.type = real. Now, for the productions L → ID L′ and L′ → ε | , L we have extra work to do. For the production L → ID L′ we insert into the symbol table: the routine is called with the name of the identifier and the type of the identifier. L.type, the inherited attribute of the left-hand-side non-terminal, is already available, so we pass ID.name and L.type; the entry is inserted into the symbol table and the appropriate fields are filled. After that, the inherited attribute of L′ is computed just before L′: L′.type = L.type, again from the left-hand side of the production.
Remember, we can compute the inherited attributes of a symbol just before it, but the synthesized attributes are available only from below; so there is no question of computing the synthesized attributes of L′ and providing rules for them in this production. L′ → ε has no rules associated with it, because no computation is necessary; the declaration list simply ends there, so there is nothing to do. Whereas if we have , L, then after the comma we initialize the inherited attribute of L, L.type = L′.type, and then process the string generated by L. So the attribute computation rules are interspersed among the production symbols, and the order in which we execute them is now constrained. Let me show you what happens in this case. Here is a very simple sentence, int x;, and here is the parse tree for it: declaration goes to dlist $, dlist goes to d d′, d′ goes to ε; d goes to T L, T goes to int, and L goes to ID and L′ (the comma alternative is not in the picture, because it is not important here), with L′ going to ε. If you observe the boxes, these contain the attribute computation rules and the action code: the symbol table insertion is action code, whereas the others, L.type = T.type and so on, are attribute computation rules. What we have really done is to introduce a pseudo node, and a pseudo edge, corresponding to each attribute computation rule; the order in which the rules have to be executed is shown in red. If you look at this position, L.type = T.type has been inserted between T and L: the production is d → T L, and the node is inserted right here.
Let us check against the grammar whether this is correct or wrong; it is indeed correct: d goes to T, then the computation rule, followed by L. Now we simply do the same DFvisit on this augmented parse tree. For example, the other attribute computation rule sits between ID and L′ in the production L → ID L′; the semantic action in between has been made into a pseudo node, with a pseudo edge attached to its parent L. If we take this augmented parse tree and simply conduct a DFvisit, executing the attribute computation rules in that order, the attribute computations automatically happen properly. We can trace it: declaration, then dlist, then d, then T, then int. When we return from int and go back up to T, we evaluate T.type = integer; this computes the attribute of T, producing the type information, which here is int. Now we go up to d and come down to the next node, the action L.type = T.type; this gets the attribute of L ready for the next visit. We go up again and come down to L, whose inherited attribute is already ready; we visit ID, go up, and come down to the next action. Now we can see that x and L.type are both ready, so the semantic action of inserting into the symbol table can be carried out properly. Then we initialize L′.type = L.type, so that attribute is ready; we go back, come down to L′, and finally its ε is visited. We go back up all the way, visit d′ and its ε, then $, and that is the end of the visit.
So, in the case of an LATG, the parse tree is first constructed without the actions; once the parse tree is built, looking at the productions we insert dummy nodes for the attribute computation rules and action segments, and once that is ready we do a DFvisit on the parse tree, executing the semantic actions as necessary, and that gets the attributes evaluated. The SATG for the same language of declarations is also shown here for comparison. The production d → T L is the same, and at the end of it we just call a procedure named patch_type; I will tell you why that is needed after we go down a little. T → int has the very simple rule T.type = integer, and T → float has T.type = real. Now we have L → ID and L → L , ID; since this is bottom-up parsing, there is no problem with left recursion here. For L → ID we insert the name into the symbol table and then create a list of names in L.name_list, which carries this ID as well. Why should we do this? The same is done in L → L , ID: we insert the name into the symbol table and append the new name to L2.name_list, and that is sent up as L1.name_list. The problem is this: by the time we have come up to this point, we have parsed the type information in T, and its attribute has been computed; we have also parsed the name list in L. If we do not carry the list of names in L as an attribute of L, how do we attach the type information to each of the names which L produces? That is the question. So now we have a list of names available as a synthesized attribute here, and the type information available as a synthesized attribute of T, so we can execute the action patch_type(T.type, L.name_list).
This procedure traverses the name list, and since its members are all entries in the symbol table, it enters the type information for each name into the symbol table and goes on to the next name. In this manner the symbol table can be constructed for this declaration list using SATGs. SATGs can be used with YACC, and I showed you several examples already; the YACC specification is translated to C code automatically, so the attribute grammar becomes a program processed by YACC. So far we just saw the grammar, and I told you that it is possible to evaluate the attributes over the parse tree. It is also possible to integrate the LATG into a recursive descent parser, so that parsing and attribute evaluation happen simultaneously, just as the attribute computations can happen hand in hand with LR parsing. So let us look at the recursive descent parser and the attribute computations for the same LATG that we saw a few minutes ago. The first production is declaration → dlist $. The function is void declaration(); the body calls dlist(), and then the next symbol should be $, that is, the end of file: if mytoken.token == EOF then return, otherwise obviously it is an error. This is the function for the non-terminal declaration. The next production is dlist → d d′; again the function dlist() returns nothing, and its body consists of a call to d() and another call to dprime(), that is all. The third production is d → T L, and there is a semantic action in between, L.type = T.type. The function produced is void d(); it does not return any synthesized attribute, which is why it is void. In its body, vartype type = T(): T produces a synthesized attribute, and that is returned into the variable type.
Now we pass that type information into L as a parameter; it is an inherited attribute. So here L.type is inherited and T.type is synthesized: T returns a value, which is given to the variable through this assignment, and L.type, being an inherited attribute, becomes an incoming parameter to the function L. That is how this production is structured in recursive descent. Now consider T → int | float; this is quite straightforward. If mytoken is int, then get the next token and return integer; integer is the type information, and remember T returns a value of type vartype, which includes int. If the token happens to be float, then we get the next token and return real; vartype also has real in it, so real is returned as the type of T. Otherwise, error. So when we have a synthesized attribute going out, it becomes the result of the function, its result type; and whenever there is an inherited attribute coming into a non-terminal, the corresponding function has an incoming parameter. We will now see L, which makes this very clear. The production is L → ID, then an action, and finally L′. L does not return any result, it only takes an inherited attribute, so there is a parameter vartype type corresponding to it. The body says: if mytoken.token == ID, that is the parsing part. Then the action is introduced into the recursive descent parser: the insertion into the symbol table, where ID.name is nothing but mytoken.value and L.type is nothing but the incoming parameter type. This function is executed, then we get the next token and call Lprime(type); L′.type = L.type is the initialization just before L′.
That code gets executed because the type information is already available as the incoming parameter; there is no need for another computation here, it is automatically available and used. Otherwise, error. The next production is L′ → ε | , L, and after the comma there is an attribute computation. The function becomes void Lprime(vartype type): no synthesized attribute is returned from here, and the incoming parameter is the inherited attribute. If mytoken.token is comma, then the second alternative is applicable: get the next token and call L(type) with the inherited attribute, type being available as the incoming parameter here. Else, if the token is not comma, the production applied is L′ → ε, so we have a null statement, just a semicolon. The last production is d′ → ; dlist | ε, and there are no attribute computations here, so only the parsing part is present in the function dprime(): if mytoken.token == ';' then get the next token and call dlist(); otherwise it is the empty alternative, a null statement. This is how a recursive descent parser embeds the semantic actions of an LATG. The most important thing to remember is that the inherited attributes are passed as incoming parameters to the function corresponding to the non-terminal, and the synthesized attributes are the outgoing results of that function. Now let us look at the SATG version of the expression evaluation grammar. It is the same grammar, with the special production E → let ID = E in E. Let us see how the SATG is written for such a grammar; there is a minor problem with the grammar, and it has to be rewritten.
Let us go through this production by production. S → E poses no problems; the semantic rule is straightforward, S.val = E.val. E → E + T obviously poses no problems either: E1.val = E2.val + T.val. E → T is just a copy, so there is nothing wrong there as well. Now, when we come to the production E → let ID = E in E, we really cannot process it as it is. The reason is that we are using a bottom-up parsing strategy for the SATG; if we attach a semantic action at the end of the production, then by that point we would already have parsed both the first E and the second E, and the association of ID to the first E could not be made available to the second E at all. Therefore we must somehow attach an action in the middle, to introduce the association of ID to E into a symbol table; the symbol table being a global entity, the second E will also know about it. To do that, we are going to break the production E → let ID = E in E into several productions. The first production is E → L B: the first part L produces let ID = E, and the second part B produces in E. At the highest level, E.val is nothing but B.val; that is a fairly simple attribute computation. But when we come to the production L → let ID = E, there are several points to be noted. The first is that we are going to use the nesting level as the scope of a particular name. The scope is first initialized to 0; when no nesting has occurred the scope value is 0, and whenever we have a new association let ID = E we are going to increment the scope. So scope is initialized to 0 (this is the comment in the specification), and as soon as we parse let ID = E we know that a new scope has been created.
We increment the scope and insert the name ID.name into the symbol table with the new scope; the value to be associated with ID.name at that particular level, or scope, is E.val. Now, scope is a global variable being manipulated in this production; that is the reason this becomes an SATG. In the production B → in E, B.val is obviously E.val, so that is not an issue at all; and by this point E has available the symbol table with the new scope. We will see the use of this symbol table down in the production F → ID, but for now assume that E has been parsed and evaluated, so E has produced a value using the association of ID to the bound expression, and B.val is E.val. Now we have exited the scope: in let ID = E in E, this is the end of the scope for the name associated with ID. We must do two things: the name introduced at that scope has to be deleted, and the scope value has to be reduced by 1. So we do both: delete_entries, with the scope as parameter, removes all the entries from the symbol table with that scope, and scope-- reduces the scope by 1. We are back at the enclosing nesting level, and thereby free to introduce any other entries at new levels from now on. T → T * F is very simple: T1.val = T2.val * F.val; similarly T → F, F → (E) and F → number. When we come to F → ID, the value of F is obtained by looking up the symbol table with the name and the present scope. Scope being a global variable, whatever value it holds is the scope at which we entered ID = E; so the name can be looked up using that scope entry, and its value is returned as the synthesized attribute of F. This is how the same expression grammar with an LAG can be modified into an SATG; the breaking of productions is essential here.
This is actually a very basic principle: as we will see when we do semantic analysis of if-then-else statements and while-do statements, we require a similar breaking up of productions to make sure that the semantic checks for those constructs are inserted at appropriate points. Now, let us move on to semantic analysis of declarations. So far we saw declarations in a slightly diluted fashion: the only thing we saw was a simple declaration of the form D → T L, where T is a type and L is a list of names. But suppose we add arrays, and to make it more interesting we also permit arrays to be declared in more than one way. For example, an array can be declared as int A[10, 20, 30], with 10, 20, 30 as the ranges of the three dimensions, or we may permit each dimension to be given in its own pair of square brackets, as in A[25][35]. Both varieties are permitted in this grammar; let us see how. L is a list of simple names or array names: L is either id-array or id-array , L; in other words, it is either a single name or a list of names. id-array is either a simple ID or an ID followed by a dimension list. dim-list produces num or num , dim-list, so it produces a comma-separated list of dimensions; declarations of the form A[10, 20, 30] are taken care of by the production ID [ dim-list ]. br-dim-list says: have one bracket, then a number, then another bracket, i.e. [ num ], or a similar structure [ num ] followed by br-dim-list. So declarations of the form A[25][35] are taken care of by the production ID br-dim-list. Now, let us see how to write LATGs for such declarations. There are a couple of points that we need to note here: the grammar I presented is obviously not LL(1).
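To make the two declaration forms concrete, here is a hedged Python sketch that parses just the dimension part of a declarator in both styles and returns the same list of dimension sizes; the token handling is deliberately simplified and the function name is my own.

```python
# Sketch: extract the dimension list from either declaration style.
# tokens is a flat list such as ['[', 10, ',', 20, ',', 30, ']']
# (for A[10, 20, 30]) or ['[', 25, ']', '[', 35, ']'] (for A[25][35]).

def parse_dims(tokens):
    dims, i = [], 0
    while i < len(tokens) and tokens[i] == '[':
        i += 1
        dims.append(tokens[i]); i += 1          # first num
        while i < len(tokens) and tokens[i] == ',':
            i += 1
            dims.append(tokens[i]); i += 1      # , num ...
        assert tokens[i] == ']'                 # closing bracket
        i += 1
    return dims

# Both surface forms yield the same dimension list:
a = parse_dims(['[', 10, ',', 20, ',', 30, ']'])   # int A[10, 20, 30]
b = parse_dims(['[', 25, ']', '[', 35, ']'])       # A[25][35]
```

This mirrors the grammar's split into dim-list (comma form) and br-dim-list (bracket form): either way, what the later phases need is just the list of dimension ranges.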
It is very easy to see why: num is common between the two dim-list productions, ID is common between the productions for names, id-array is common between its two alternatives, and [ num is common between the two br-dim-list productions. All this requires left factoring and so on to make the grammar LL(1). So, here we assume that the parse tree is available and that attribute evaluation is a pass over the parse tree, augmented with dummy symbols for actions and so forth. As for modifying the CFG to make it truly LL(1): earlier I showed you a grammar that can be turned into a recursive descent parser, and that is an example of an LL(1) grammar along with its semantic actions. Making this grammar LL(1) and changing the semantic actions appropriately are left as exercises. Now, the attributes and their rules of computation for productions 1 to 4 are as before, so up to that point the grammar has not changed and I am not going to repeat their semantic actions. We provide the attribute grammar only for productions 5 to 7; the attribute grammar for 8 is very similar to that of 7. Production 8 handles the bracketed form of declaration, and processing it is very similar to the processing for 6 and 7, with absolutely nothing different, so we will do only 6 and 7 and leave 8 for the exercises. Finally, handling constant declarations is similar to handling variable declarations. A variable declaration has a type attached to it, whereas for a constant declaration, say int a = 5, we can treat a as a variable initialized to the value 5: the initial value is stored along with the variable in the symbol table and used later for code generation. Of course, some languages, such as Pascal and C++, also have explicit constant declarations.
In such cases the identifier, the name associated with the constant declaration, is also entered into the symbol table just like a variable, but a flag indicates that it is a constant; otherwise, processing constant declarations is not very different from handling variables. The other thing we must note is that each identifier has several pieces of information attached to it. This is the identifier type information record, and all this information must also be available in the symbol table entry for that particular name: the name of the identifier, the type of the identifier, the element type of the identifier, and a pointer to the dimensions in case the identifier is an array. Let us see what these fields are. The type field can take two values, simple or array: type is simple for non-array names, including user-defined scalar names, and type is array for array declarations. The fields eltype and dimlist-pointer are relevant only for arrays. What does eltype store? It stores either integer, real or error-type, and this is the type of a simple identifier or the type of an array element. For example, if you have float myarray[5][12][15], float is the type of each array element and 5, 12, 15 are the ranges of the dimensions. dimlist-pointer points to a list of the ranges of the dimensions of an array; C-style array declarations are assumed here. Taking this example float myarray[5][12][15]: float, as I already told you, is the eltype; the type of the name myarray is array; the name field holds myarray; and there are three dimensions, with 5 elements in the first dimension, 12 elements in the second dimension and 15 elements in the third dimension.
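The identifier record just described might be modelled as follows. This is a sketch in Python; the field names mirror the lecture's terminology (name, type, eltype, dimlist) rather than any particular compiler's layout, and the constant-declaration fields is_const and init_value are my own naming.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class IdInfo:
    """Identifier type information record (symbol table entry)."""
    name: str                            # identifier name
    type: str                            # 'simple' or 'array'
    eltype: Optional[str] = None         # 'int', 'real' or 'error-type'
    dimlist: Optional[List[int]] = None  # dimension ranges, arrays only
    is_const: bool = False               # flag for constant declarations
    init_value: object = None            # initial value, e.g. int a = 5

# Entry for the declaration float myarray[5][12][15]:
myarray = IdInfo(name="myarray", type="array", eltype="real",
                 dimlist=[5, 12, 15])

# Entry for a constant declaration int a = 5: a variable with a flag.
a = IdInfo(name="a", type="simple", eltype="int",
           is_const=True, init_value=5)
```

For a simple name, eltype holds the identifier's own type and dimlist stays empty; for an array, eltype holds the element type and dimlist carries the ranges that later phases need.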
So, what we really do is make a list 5, 12, 15 and make the dimlist-pointer point to such a list; that is what is hanging off the record in this particular example. Why do we have to do this? The point is that when we actually do code generation, or offset computation for example, we will need to find the size of the array or the size of a slice. If the language permits assigning slices of arrays to other arrays or other slices, then we may require the size of a slice. If we take the slice myarray[3], what is its size? Each element of the first dimension is an array of 12 × 15 = 180 elements. Whereas if we consider the slice myarray[3][4], each such element is an array of 15 elements, and if we consider the whole array, the size is 5 × 12 × 15 = 900 elements. So, depending on what we require, we may have to traverse this list and produce the number of elements of the appropriate slice. This is the reason we require such elaborate information in the identifier type information record. We will stop here and continue with semantic analysis in the next lecture. Thank you.
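The slice-size arithmetic above is just a product over the remaining dimensions of the dim-list. A minimal sketch, assuming the list [5, 12, 15] from the example and a helper function of my own naming:

```python
from math import prod

def slice_size(dimlist, indices_fixed):
    """Number of elements in a slice of an array with the given
    dimension list, after fixing the first indices_fixed subscripts.
    Traversing the tail of the dim-list and multiplying the ranges
    is exactly the walk described in the lecture."""
    return prod(dimlist[indices_fixed:])

dims = [5, 12, 15]              # float myarray[5][12][15]
whole = slice_size(dims, 0)     # the whole array: 5 * 12 * 15
row   = slice_size(dims, 1)     # myarray[3]:      12 * 15
plane = slice_size(dims, 2)     # myarray[3][4]:   15
```

With indices_fixed equal to the number of dimensions, prod of the empty tail is 1, i.e. a single scalar element, which is the right base case for offset computation.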