Welcome to part 5 of the lecture series on syntax analysis. So far, we have covered the basics of syntax analysis, top-down parsing, recursive descent parsing, and a bit of bottom-up parsing. So, let us continue with LR parsing techniques today. As I explained in the previous lecture, LR parsing is a method of bottom-up parsing: L stands for left-to-right scanning of the input, R for rightmost derivation in reverse, and k is the number of lookahead tokens; LR(0) and LR(1) are of the greatest practical interest. LR parsers are important because they can be generated automatically using parser generators, and the LR grammars, the subset of context-free grammars for which such parsers can be constructed, are easy to write; that is the reason why they are so popular today. So, let us look at the parser generator. The generator is a very simple device: it takes a grammar as input and generates a parsing table, called the LR parsing table. This table is fed into another box containing a stack and a driver routine, and this whole setup is the parser. So, the stack, the driver routine, and the parsing table together make the parser; it takes the program as input and delivers as output possibly a syntax tree or something else. Now let us look at the parser operation. To understand it, we need to know what exactly a configuration of an LR parser is. A configuration has two parts: one is the stack, the other is the unexpended or unused input. To begin with, the stack holds only the initial state of the parser, and the unexpended input is the entire input, terminated with an end-of-file or dollar marker. Somewhere in the middle of a parse, a configuration will consist of a number of states intermixed with grammar symbols, followed by the rest of the input. The parsing table is a little more complex: it has two parts, the action part and the goto part.
The action part has four types of entries: shift, reduce, accept, and error. The goto table is used to provide the next-state information, which is actually necessary after a reduce move. Before the parsing algorithm, let us look at the parsing table to understand the action and goto entries a little better. The table is indexed by the state numbers on one side; on the other side, the action table is indexed by the tokens that can appear in the input, and the goto table is indexed by the non-terminals of the grammar. Entries such as s2 indicate that there is a shift operation and that the next state the parser enters is 2. Similarly, s3 indicates a shift with next state 3, and so on. Entries such as r3 indicate that the next move is a reduction, and the production used is number 3. All the productions in the grammar are numbered sequentially, and the production number tells you which production is to be used for the reduction; in this case production 3 is S → c, and we use it to perform the reduction here. During the reduction, some of the stack symbols are removed and a state is exposed. After the state is exposed, the non-terminal on the left-hand side of the production and this state, looked up together in the goto table, tell us which state to go to next. So, now let us look at the parsing algorithm, which explains the actions I was describing just now in more detail. In the initial configuration, the stack has state 0 and the unexpended input is the entire input. There is a repeat-until loop which goes on forever unless it is interrupted by either an error or an accept action. Let S be the top-of-stack state and let a be the next input symbol.
So, now the parser looks up the action part. If action[S, a] says shift P, then it pushes a and P onto the stack, in that order, and the input pointer is advanced to pick up the next input symbol. If the action part says reduce by a production A → α (the number I indicated in the parsing table), it pops 2×|α| symbols off the stack; the reason we pop twice |α| is that |α| is the number of symbols on the right-hand side of the production, and the state symbols are intermixed with the grammar symbols, therefore we need to remove 2|α| entries. A state S′ is thereby exposed. Now the left-hand side A and the state goto[S′, A] are pushed onto the stack; this is the way the goto table is used. If the action is accept, the parser gets out of the infinite loop; otherwise there is an error, and the error recovery routine is called. So, let us look at an example to understand all the operations that I described just now. The stack contains 0, the initial state number, and the input is a c b b a c. This is the parsing table that we have in mind, and these are the productions that we are using; I have actually written down all the productions necessary for the reductions on this slide itself, so that we do not have to go back and forth. The entry at [0, a] tells us the action is s2; let us look up 0 and a, and it indeed says s2. So, the symbol a is shifted onto the stack along with the state 2; this is the state the parser is in right now. For 2 and c the table says s3, so we shift c and the next state 3 onto the stack. The combination of 3 and b says reduce by production 3, that is, S → c. When we reduce, we take out the 3 and the c; this is twice the length of the right-hand side. There is one symbol on the right-hand side, so we take out the state symbol and the c itself, and the state number 2 is exposed.
So, we look up goto[2, S], and the table gives us 8; the non-terminal S and the new state 8 are pushed onto the stack. The combination of 8 and b says shift with next state 10, so b and 10 go onto the stack; 10 and b say shift 6, so b and 6 go onto the stack; again it is shift time, shift 7, so a and 7 also go onto the stack. Now it is time for a reduction by the production A → ba. There are 2 symbols on the right-hand side, so we take out 4 entries from the stack: 7, a, 6, and b. Now state 10 is exposed, and that state with the non-terminal A, the left-hand side of the production, gives us state 11; so we push the non-terminal A and the state 11 onto the stack. Here, please observe that this is the process of handle pruning. We saw it in the shift-reduce parsing algorithm as well: whenever a right-hand side appears on the stack, it is always at the top, so we remove the right-hand side of the production and push the non-terminal onto the stack. But as I explained, there is a DFA whose states are being pushed onto the stack as well; this is the state of the parser, and the DFA state on the top tells us whether it is time for a reduction or a shift. For example, 11 and c tell us that it is a reduction by production 6, B → bA: take out 4 entries, namely 11, A, 10, and b, exposing state 8; 8 and B give us state 9, so state 9 goes onto the stack along with B. 9 and c is again a reduction; this time again remove 4 entries, so state 2 is exposed; 2 and A give us 4, so A and 4 go onto the stack. 4 and c tell us it is time for a shift, so c and 3 go onto the stack. Again we reduce by S → c; that gives us 4, S, 5 on top. Then there is a reduction by production number 2, which exposes state 0; state 0 and S give us state 1, so 0, S, 1 remain on the stack. Finally, 1 and the dollar tell us that it is the start production and the accept action, and the whole process ends.
Here is another example, with the familiar grammar: E → E + T, E → T, T → T * F, T → F, F → ( E ), F → id, and S → E. Here is the parsing table for this particular grammar; observe that S is the start symbol. Let us quickly run through a simple example to see what it says. The string is id + id * id followed by the dollar; observe that this grammar is unambiguous, so the string can be parsed in exactly one way. In state 0, on id, there is a shift action; then a reduce action, and then again a reduce action. After one more reduction we get 0 E 1 on the stack, with + id * id $ remaining. Then there are two shifts; this is the stack at that point, and now there is a reduce by 6, production number 6 being F → id. So, we reduce, and F and 3 are pushed onto the stack. Then again there are two shifts, and at [5, $] there is a reduce: we reduce by F → id, and the state 7 is exposed. goto[7, F] is 10, so we push F and 10 onto the stack. Then it is time for a reduction, then another reduction, and finally accept. So, the LR parser is nothing but a shift-reduce parser; the actions are exactly the same, but at the time of a shift or a reduction we also push the state numbers onto the stack along with the terminal or non-terminal symbols. Now it is time to understand how to build the parsing table, because we have seen the operation of the parser given the table. To do that, we must understand what exactly makes a grammar an LR grammar. So, let us consider a rightmost derivation: S derives, in zero or more steps, φBt, which in turn derives φβt. In other words, in the last step B → β is the production that has been applied. The basic idea in an LR grammar is that we should be able to determine the handle uniquely; β is the handle here, and B → β is the production.
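The driver loop and table lookups we just traced can be written out concretely. Below is a minimal sketch in Python for this very expression grammar; the ACTION/GOTO entries follow the standard textbook SLR(1) table for it, so the state numbering is an assumption and may differ from the numbering on the slides. For simplicity the stack here holds only state numbers, since the interleaved grammar symbols are redundant for the machine itself.

```python
# Productions, numbered as on the slide:
#   1: E -> E + T   2: E -> T    3: T -> T * F
#   4: T -> F       5: F -> (E)  6: F -> id
PRODS = {1: ("E", 3), 2: ("E", 1), 3: ("T", 3),
         4: ("T", 1), 5: ("F", 3), 6: ("F", 1)}   # (lhs, |rhs|)

# Standard textbook SLR(1) table for this grammar (assumed numbering).
ACTION = {
    (0, "id"): "s5", (0, "("): "s4",
    (1, "+"): "s6", (1, "$"): "acc",
    (2, "+"): "r2", (2, "*"): "s7", (2, ")"): "r2", (2, "$"): "r2",
    (3, "+"): "r4", (3, "*"): "r4", (3, ")"): "r4", (3, "$"): "r4",
    (4, "id"): "s5", (4, "("): "s4",
    (5, "+"): "r6", (5, "*"): "r6", (5, ")"): "r6", (5, "$"): "r6",
    (6, "id"): "s5", (6, "("): "s4",
    (7, "id"): "s5", (7, "("): "s4",
    (8, "+"): "s6", (8, ")"): "s11",
    (9, "+"): "r1", (9, "*"): "s7", (9, ")"): "r1", (9, "$"): "r1",
    (10, "+"): "r3", (10, "*"): "r3", (10, ")"): "r3", (10, "$"): "r3",
    (11, "+"): "r5", (11, "*"): "r5", (11, ")"): "r5", (11, "$"): "r5",
}
GOTO = {(0, "E"): 1, (0, "T"): 2, (0, "F"): 3,
        (4, "E"): 8, (4, "T"): 2, (4, "F"): 3,
        (6, "T"): 9, (6, "F"): 3, (7, "F"): 10}

def parse(tokens):
    """LR driver; returns the production numbers of the reductions, in order."""
    stack, i, trace = [0], 0, []
    while True:
        s, a = stack[-1], tokens[i]
        act = ACTION.get((s, a))
        if act is None:
            raise SyntaxError(f"error at token {i}: {a}")
        if act == "acc":
            return trace
        if act[0] == "s":                  # shift: push next state, advance input
            stack.append(int(act[1:]))
            i += 1
        else:                              # reduce A -> alpha: pop |alpha| states,
            lhs, n = PRODS[int(act[1:])]   # then consult GOTO on the exposed state
            del stack[len(stack) - n:]
            stack.append(GOTO[(stack[-1], lhs)])
            trace.append(int(act[1:]))

print(parse(["id", "+", "id", "*", "id", "$"]))
```

Running it on id + id * id $ reproduces the reduction sequence of the trace above: F → id, T → F, E → T, then the reductions for the second and third id, then T → T * F and E → E + T, and finally accept.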
So, we should be able to look at the first k symbols of this t, in any rightmost derivation of the grammar, and determine which production was applied at that particular point. A grammar is said to be LR(k) if, for any input string, at each step of any rightmost derivation the handle β can be detected by examining the string φβ and scanning at most the first k symbols of the unused input string t. This φβ is on the stack; if we go back one step, the stack content is exactly what we mean by φβ, with the state numbers extra. The finite-state automaton whose states are tracked on the stack gives us a method of examining the string φβ, and by looking at the first k symbols of t and at the top-of-stack state, the LR parser will be able to determine which production was used at this point, if it is a reduction; otherwise it will determine that it is a shift action. Here is an example. The grammar is ambiguous: S → E, E → E + E | E * E | id. We want to show that this is not LR(2), that is, even with a lookahead of 2 we will not be able to determine the handle uniquely. There are two derivations that I have shown here. One: S derives E, E derives E + E, then E + E * E, then E + E * id, then E + id * id, and finally id + id * id; this is a rightmost derivation. Another rightmost derivation: S → E, then E * E instead of E + E, then E * id, then this becomes E + E * id, then E + id * id, and finally id + id * id. So, the same string id + id * id has two rightmost derivations because the grammar is ambiguous; but the point we want to emphasize here is that we cannot determine the handle uniquely. Consider the step 6′ and the step 6: the handle here is id, and the handle there is also id, the same first symbol, and the production used is of course E → id.
So, we have E + id * id as the sentential form one step backward here; there also, E + id * id is the sentential form when we traverse one step backward; the reduction from id to E uses this sentential form. In the next steps, 5′ and the corresponding 5, the handle is again id, the production used is again E → id, and in both cases we get the sentential form E + E * id, identical in the two derivations. Then the difference surfaces at step 4 and step 4′: the handle in this derivation is id, whereas the handle in that derivation is E + E. The lookahead, that is, the unexpended input, is * id here, and the unexpended input is * id there as well. So, if we look at two symbols, we would be looking at * id in both cases, the same lookahead string; but by looking at that string we are not able to uniquely determine whether the LR parser must take E + E as the handle or id as the handle. This is precisely the ambiguity, and the grammar happens to be non-LR(2) in this case: the handle cannot be determined using the lookahead and the derivation, because the stack content is identical. You can see that E + E is on the stack in both cases and the unexpended input is * id in both cases, so by looking at the stack we are not able to say that the handle is E + E here and id there. Therefore, the grammar is not LR(2). Now, let us move on and understand how to build the deterministic finite automaton which tracks the parser states and tells us when to shift and when to reduce. To do that, we must go through some terminology. The first term that we need to define is a viable prefix of a sentential form: a viable prefix of a right sentential form φβt, where β denotes the handle, is any prefix of φβ.
So, in other words, a viable prefix cannot contain symbols to the right of the handle. Let us take an example. The grammar is S → E #, E → E + T | E − T | T, T → id | ( E ). Let us look at a rightmost derivation: from S ⇒ E # we apply the production E → E + T, so we get E + T #; for T we apply T → ( E ), so we get E + ( E ) #; now E → T is applied, giving E + ( T ) #; now T → id is applied, so we get E + ( id ) #. In this last sentential form, the handle is id, because we applied the production T → id to get it. So, any prefix of E + ( id is a viable prefix: E, E +, E + (, and E + ( id are all viable prefixes of this particular sentential form. The property of a viable prefix is this: given a viable prefix, we should be able to add appropriate symbols to its right end to get a right sentential form. For example, if you take the viable prefix E + ( id, then we can add a right parenthesis and a # to get a right sentential form. Similarly, consider the earlier sentential form E + ( T ) #: once we derive the terminal symbols from T, say T derives id, then if we add id ) # to the viable prefix E + ( we get a right sentential form. You can consider any one of these, say E +: for E +, we add whatever ( T ) # derives, which would include ( id ) #, and we again get a right sentential form.
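As a tiny check of the definition, the viable prefixes of the sentential form E + ( id ) # with handle id are exactly the prefixes of E + ( id; a one-line sketch (representing a sentential form as a list of grammar symbols is my own encoding):

```python
# A viable prefix of phi-beta-t is any prefix of phi-beta, i.e. it may
# not contain any symbol to the right of the handle.
def viable_prefixes(phi_beta):
    """All prefixes of phi * beta (phi followed by the handle beta)."""
    return [phi_beta[:i] for i in range(len(phi_beta) + 1)]

# For E + ( id ) # with handle id, phi-beta is E + ( id:
print(viable_prefixes(["E", "+", "(", "id"]))
```

This lists the empty prefix together with E, E +, E + (, and E + ( id, as enumerated above.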
So, this is the characteristic of viable prefixes: they characterize the prefixes of right sentential forms that can occur on the stack of an LR parser. When we go from the terminal string to the start symbol, a lot of reductions take place, and during these reductions the stack contains parts of the sentential forms; the viable prefixes are exactly those parts which can lie on the stack of an LR parser. A major theorem in LR parsing theory is that the set of viable prefixes of all the right sentential forms of a grammar forms a regular language. So, the DFA of this language can detect handles during LR parsing; we will be seeing that very soon. The point is that the DFA reaches a so-called reduction state and thereby signals that the viable prefix cannot grow further; that means a reduction is necessary at this point. I will show you what exactly a reduction state is after this slide. This sort of DFA can be constructed by the compiler from the grammar, and we are going to discuss that procedure a little later; all LR parsers have such a DFA incorporated in them. This is the heart of an LR parser, really. To build it, we construct an augmented grammar: if S is the start symbol of G, then G′ contains all the productions of G along with a new production S′ → S. The reason we do that is that there could be productions with S on the right-hand side as well, but we want to make sure that the start symbol is unique, and we want to halt the parser as soon as S′ appears on the stack; to do that we add an extra start symbol and make it an augmented grammar. Here is an example of a DFA for this particular grammar, the LR(0) DFA. Let us understand what exactly this is: there are some states which are in greenish blue, and there are some other states which are in violet.
The states which are marked in violet also have a production associated with them; these are the reduction states. So, 5, 8, 3, 2, 9, and 11 are all reduction states, and the other states, 0, 1, 6, 7, 10, and 4, which are in greenish blue, are all shift states. When the parser DFA enters one of the reduction states, a reduction by that particular production is bound to happen; whereas if it is in any other state, then upon an input symbol a shift happens and it goes to an appropriate state. For example, from the state 6, on the input id, it goes to state 3, whereas in state 4, on the input (, it remains in the same state 4, and it can go to state 3 on id. So, this is the parser DFA, and here is its start state. There is only one difference between an actual DFA as we defined it in lexical analysis and the DFA which I have drawn here: in this DFA there is no explicitly designated final state; in fact, all the states are final states. The reason is that from the start state it does not matter which path you take, the labels along it form a viable prefix. For example, from 0 to 1 we have E, so E is a viable prefix; along 0, 1, 6, E + is a viable prefix; E + T is also a viable prefix, and so on and so forth. To construct this DFA, there must be some algorithm written out, and what we are going to do now is provide a method of constructing it using what are known as items; the procedure is quite simple. We then state some results which link the DFA constructed using items to the DFA which recognizes viable prefixes.
So, let us define items. A finite set of items is associated with each state of a DFA; remember, we are now defining possibly some other DFA, we will provide an algorithm to construct it using these items, and then mention results that say that this is indeed the DFA which recognizes viable prefixes. What exactly is an item? An item is very simple: take a production, put a dot anywhere on the right-hand side, and you get an item. For example, if you consider the production E → E + T, then you can put the dot just before E + T, just after the E, just after the +, or after the T; so you can actually form four items from the single production E → E + T. The general form of an item is A → α1 · α2, with either α1 or α2 or both possibly being ε; items are enclosed in square brackets. Now, a little more terminology. An item [A → α1 · α2] is said to be valid for some viable prefix φα1. So, now we are trying to link an item and a viable prefix. We have a viable prefix φα1; please see that the α1 there and the α1 in the production are the same, and observe that α1 is the portion of the right-hand side just before the dot. The definition says [A → α1 · α2] is valid for the viable prefix φα1 if and only if there is some rightmost derivation S ⇒* φAt ⇒ φα1α2t, applying the production A → α1α2 at this point. So, if we are able to get the production A → α1α2 applied, and the viable prefix at this point is indeed φα1, then we say that the item [A → α1 · α2] is valid for this particular viable prefix. You can also observe here that the item [A → · α1α2] is actually valid for the viable prefix φ.
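Since an item is nothing but a production plus a dot position, a convenient concrete representation is a triple; this little sketch (the tuple encoding is my own choice, not anything from the slides) enumerates the four items of E → E + T mentioned above:

```python
# An item [A -> alpha1 . alpha2] can be encoded as (lhs, rhs, dot),
# where dot is the index of the dot's position within rhs.
def items_of(lhs, rhs):
    """All items of the production lhs -> rhs: one per dot position 0..len(rhs)."""
    return [(lhs, rhs, d) for d in range(len(rhs) + 1)]

# The production E -> E + T yields exactly four items:
for lhs, rhs, d in items_of("E", ("E", "+", "T")):
    print(lhs, "->", " ".join(rhs[:d]), ".", " ".join(rhs[d:]))
```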
Why? The same rightmost derivation can be used to show it: we have the sentential form φα1α2t, the item is [A → · α1α2], and the viable prefix is φ; so [A → · α1α2] is valid for the viable prefix φ. Similarly, if you consider the item [A → α1α2 ·], the viable prefix for which it is valid is φα1α2; that is also trivial from the same rightmost derivation: you can consider the entire φα1α2 as the viable prefix, there is nothing after the dot, and just the t remains in the right sentential form. So, [A → α1α2 ·] is valid for the viable prefix φα1α2. There may be several items valid for a single viable prefix; let us see how. Consider the derivations given below: S derives E #, which derives E − T #; S derives E #, which derives E − T #, which in turn derives E − id #; and finally S derives E #, then E − T #, and then E − ( E ) #. Three derivations. Now, the viable prefix that we are considering is E −: in each of these sentential forms, E − is a viable prefix. Let us consider three items; the claim is that all three are valid for the viable prefix E −. Take the first item, [E → E − · T]: E − is the α1 part and T is the α2 part. We take the sequence S ⇒ E # ⇒ E − T #; E − is our viable prefix, that corresponds to our α1, and then T is α2; this shows that this item is valid for E −. Next, [T → · id]: again E − is our viable prefix.
Here α1 is null and id is α2; E − id appears here, so in this derivation we see that E − is the viable prefix, and therefore [T → · id] is valid at this point; id is indeed the handle as well. Third is [T → · ( E )]; we again take the corresponding derivation, with α2 being ( E ) and E − our viable prefix, so this derivation shows that this item is indeed valid for the viable prefix E −. So, there may be many items valid for a single viable prefix. Then what does an item indicate? In a grammar and a derivation sequence, an item indicates how much of a production has already been seen and how much remains to be seen. If you consider the item [E → E − · T], it says we have already seen the part of the input string derived from E −; of course, I am assuming that this item is valid for some viable prefix. In the derivation we would have already seen the part of the input string derived from E −, and the input string derivable from T is yet to be seen. So, that is the interpretation: before the dot it indicates the past, and after the dot it indicates the future. Each state of the LR(0) DFA contains only those items that are valid for the same set of viable prefixes. I have still not told you how to construct a state of the DFA using these items, but a state will contain many of them, and the point is that all the items in a state are valid for the same set of viable prefixes. For instance, all the items in state 7 are valid for the viable prefixes E − and ( E −. Let us look at that: this is the state number 7, and the path we take from the initial state is 0 to 1 and then to 7, and the labels we accumulate are E −. Similarly, let us see how to go to state 7 using some other path.
So, from 0 we can also go to 1, then to 6, then to state 4, then to state 10, and finally to state 7; we cannot go from state 6 to state 10 directly, we need to go through state number 4 and then to state number 10, with the arc in that direction. The labels we accumulate along this path are E, then +, then the parenthesis, then another E, and finally a −. So, if you accumulate the labels on a path which takes you to any particular state, that is very significant, and that is what this slide is trying to say: all the items in state 7 are valid for the viable prefixes E −, ( E −, and many more, of course. Similarly, all the items in state 4 are valid for the viable prefix (, and many more, of course. The basic idea is what I was trying to describe just now: the set of all viable prefixes for which the items in a state S are valid is the set of strings that can take us from the state 0 to the state S. So, find out all the strings which can take you to state 7; those are the viable prefixes for which the items in state 7 are valid. Similarly, if you take state 10, all the strings which can take you from 0 to 10 are the viable prefixes for which the items in state 10 are valid. So much for the validity of items; constructing the LR(0) DFA using sets of items is very simple, so let us look at that procedure now, and then look at its relevance to our problem. There is an operation called closure. First let me explain the closure operation with respect to these examples, and then look at the algorithm itself. Let us say we are given the item S → · E #. Closure says: look at the symbol after the dot; if that is a non-terminal, then add the items for all the productions of that particular non-terminal, with the dot in the leftmost position.
So, for E there are three productions, E → E + T, E → E − T, and E → T; all three items are added with the dot in the leftmost position. Now, do this for the items that we just added: again we have the symbol E after a dot, and there is nothing more to add for E, but one item gives us a new symbol T after the dot, so we add the items associated with T to the state: T → · ( E ) and T → · id, with the dot in the leftmost position. These orange-coloured ones are the items which we added because of the closure. Here, in the next state, these are the items which are given to us, but it so happens that the symbol after the dot is a terminal symbol; obviously there are no productions corresponding to terminal symbols, so we cannot expand this state further using closure. In state number 7 we have E → E − · T; T is a non-terminal, so we can add two more items for this closure, and E → T · adds nothing. The closure procedure itself is very simple. closure(I) is computed as follows: while more items can be added to I, for each item A → α · B β in I (observe that we are looking at the non-terminal B after the dot) and for each production B → γ, we add the item B → · γ to I if it is not already present. This is what we did here; when two items would have given us the same new items, we did not add them a second time. So, that is the closure operation. Then there is another operation called goto. Again let us consider the blue items in state 0: three of the blue items have the non-terminal E just after the dot, and the others have different symbols, of course; the goto set computation tries to advance the dot by one position.
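The closure procedure above can be sketched directly. This is a minimal version assuming the lecture's grammar S → E #, E → E + T | E − T | T, T → ( E ) | id; the encoding of the grammar as a dict from non-terminal to right-hand-side tuples, and of items as (lhs, rhs, dot) triples, is my own convention:

```python
# closure(I): while items can be added, for each item A -> alpha . B beta
# with a non-terminal B after the dot, add [B -> . gamma] for every
# production B -> gamma (dot in the leftmost position).
GRAMMAR = {
    "S": [("E", "#")],
    "E": [("E", "+", "T"), ("E", "-", "T"), ("T",)],
    "T": [("(", "E", ")"), ("id",)],
}

def closure(items):
    result, work = set(items), list(items)
    while work:
        lhs, rhs, dot = work.pop()
        if dot < len(rhs) and rhs[dot] in GRAMMAR:    # non-terminal after the dot
            for gamma in GRAMMAR[rhs[dot]]:
                item = (rhs[dot], gamma, 0)           # dot in leftmost position
                if item not in result:
                    result.add(item)
                    work.append(item)
    return frozenset(result)

# Closure of {[S -> . E #]} pulls in all E- and T-productions: 6 items.
state0 = closure({("S", ("E", "#"), 0)})
print(len(state0))   # 6
```

The worklist loop is equivalent to the "while more items can be added" formulation: each newly added item is itself examined for a non-terminal after its dot.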
So, whenever the symbol after the dot is the same, we take all the items in the state with that symbol after the dot and advance the dot by one position; that gives us a few items. For example, one gives us S → E · #, another gives us E → E · + T, and another gives us E → E · − T. We add them into another state: if no existing state has exactly these items, we create a new state and add these items to it. Here again we check whether it is possible to do a closure operation; in this case all the symbols after the dot are terminal symbols, so there are no more items we can add by closure. Now, for this state, let us again form the goto set, or goto state. There is a dot just before the −, and this is the only item which has a − after the dot, so we consider only this particular item, advance the dot by one position, get the item E → E − · T, and add it to a new state. Now the symbol after the dot is a non-terminal, so we add the two items which the closure operation derives to this state. The goto set computation is a very simple procedure. goto(I, X), where I is a set of items and X is a grammar symbol, either a terminal or a non-terminal (in the first case above it was a non-terminal, and in the second case a terminal): the new item set I′ contains A → α X · β for every item A → α · X β already in I; that is, we advance the dot past X and put the new item into I′. If I′ already exists as a state, we do not form a new state; we do nothing more for that particular goto set. Finally, we form the closure of I′ and return it as the result.
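The goto computation fits in a few lines on top of closure. Here is a self-contained sketch using the same grammar and the same (lhs, rhs, dot) triple encoding as before (both my own conventions, not from the slides):

```python
# goto(I, X): advance the dot past X in every item of I that has X
# right after the dot, then take the closure of the result.
GRAMMAR = {
    "S": [("E", "#")],
    "E": [("E", "+", "T"), ("E", "-", "T"), ("T",)],
    "T": [("(", "E", ")"), ("id",)],
}

def closure(items):
    result, work = set(items), list(items)
    while work:
        lhs, rhs, dot = work.pop()
        if dot < len(rhs) and rhs[dot] in GRAMMAR:
            for gamma in GRAMMAR[rhs[dot]]:
                item = (rhs[dot], gamma, 0)
                if item not in result:
                    result.add(item)
                    work.append(item)
    return frozenset(result)

def goto(items, x):
    moved = {(lhs, rhs, dot + 1)
             for lhs, rhs, dot in items
             if dot < len(rhs) and rhs[dot] == x}
    return closure(moved)

state0 = closure({("S", ("E", "#"), 0)})
state1 = goto(state0, "E")  # {[S -> E . #], [E -> E . + T], [E -> E . - T]}
print(sorted(i[:2] for i in state1))
```

As described above, goto(state0, E) contains exactly the three items with the dot advanced past E, and its closure adds nothing because only terminals follow the dot; goto(state1, −) then yields E → E − · T plus the two T-items contributed by closure.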
So, that is the goto set computation. Now, what is the intuition behind closure and goto; why should we do all this? If an item A → α · B β is in a particular state or item set, then sometime in the future we expect to see in the input a string derivable from B β. The implication is that if a string is derivable from B β, there should be an initial part of that string which is derivable from the non-terminal B alone. This is the reason for adding the item B → · γ, corresponding to each production B → γ, to the state we already have: if the state contains A → α · B β and we expect to see a string derivable from B β, we must also add the items B → · γ to announce that an initial part of the string is derivable from B, namely the part derivable from γ. In summary, when we add something because of the closure operation, we are only announcing that parts of the expected input are derivable from the non-terminals that appear after the dots. Now, if I is the set of items valid for a viable prefix γ, then it is important to note that all the items in closure(I) are also valid for γ. I already showed you a hint of this before, but let me show it again. If [A → α · B δ] is valid for the viable prefix φα, and B → β is a production, consider the derivation S ⇒* φAt ⇒ φαBδt ⇒* φαBxt ⇒ φαβxt; we are applying the production A → αBδ in the first step and the production B → β in the last step. This derivation shows that not only is the item [A → α · B δ] valid for the prefix φα, the item [B → · β] is also valid for the same prefix φα.
So, see this here: this is our viable prefix, and this beta is the handle. Since the dot is at the beginning of the item B going to dot beta, that item is valid for this particular viable prefix phi alpha; phi alpha is here, and phi alpha is here. So both items are valid. Now, what about goto? goto of I comma X is the set of items valid for the viable prefix gamma X. The above derivation also shows that the item A going to alpha B dot delta is valid for the viable prefix phi alpha B, as I explained before: in the same derivation, phi alpha B can be our viable prefix, and in that case the item A going to alpha B dot delta would be valid for it. This is the intuition behind the construction of the automaton.

So, let us look at the entire algorithm: how to construct the collection of sets of items for the grammar G prime, the augmented grammar. To begin with, form just the one item S prime going to dot S and take its closure; that gives us the first set of items. Then, until no more sets can be added to the collection C of item sets, we compute goto of I comma X for each item set I in C and each grammar symbol X: if goto of I comma X is not empty and not already in C, we add it to the collection, that is, C becomes C union goto of I comma X, and we go back, now with one more item set in the collection. We keep applying goto, which in turn applies closure, until we cannot get any new states. Each set in C corresponds to a state of the LR(0) DFA, and this is the DFA that recognizes viable prefixes.

Now let me walk through the construction of the sets of items. To begin with, this is our start state, containing S going to dot E hash; the old start symbol was E, and the new start symbol is S.
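The fixed-point loop just described can be sketched in a few lines of Python (again my own illustration, self-contained on the lecture's grammar; `closure` and `goto` are the operations defined earlier in this lecture):

```python
# Lecture's augmented grammar: S -> E #, E -> E + T | E - T | T, T -> id | ( E )
GRAMMAR = {
    "S": [("E", "#")],
    "E": [("E", "+", "T"), ("E", "-", "T"), ("T",)],
    "T": [("id",), ("(", "E", ")")],
}
NONTERMINALS = set(GRAMMAR)

def closure(items):
    result, work = set(items), list(items)
    while work:
        lhs, rhs, dot = work.pop()
        if dot < len(rhs) and rhs[dot] in NONTERMINALS:
            for prod in GRAMMAR[rhs[dot]]:
                item = (rhs[dot], prod, 0)
                if item not in result:
                    result.add(item)
                    work.append(item)
    return frozenset(result)

def goto(items, symbol):
    return closure({(l, r, d + 1) for (l, r, d) in items
                    if d < len(r) and r[d] == symbol})

def items_of(start_item):
    """C := { closure({start item}) }; repeat: for every I in C and
    every symbol X after a dot, add goto(I, X) if it is new and
    nonempty; stop when an iteration adds nothing."""
    c = {closure({start_item})}
    while True:
        new = {goto(i, rhs[dot])
               for i in c
               for (_, rhs, dot) in i if dot < len(rhs)} - c - {frozenset()}
        if not new:
            return c
        c |= new
```

On this grammar the loop converges to twelve item sets, which are the twelve states of the LR(0) DFA built in the walkthrough below.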
So, this is the augmented grammar; instead of S prime I have simply used S here. S going to dot E hash is the first item, and from it we add all these items by the closure operation. Observe the E after the dot: these two items get in, and then this item also gets in; these are the three productions for E. And because of the T after the dot, these two items get in as well. So we have one state now. Let us advance the dot systematically for each item. The dot moves to the second position, giving S going to E dot hash; this goto state, goto of state 0 on E, is state 1. E is also the symbol after the dot in these two items, so they go into the same state. None of the symbols now after the dot is a non-terminal, so this state cannot grow further by the closure operation. Consider this item: it gives us E going to T dot, and that is the only item in state 2. Then we get T going to id dot, which is state 3, and we get state 4 by advancing the dot in this item, giving T going to parenthesis dot E parenthesis. E is a non-terminal, so we add these three items, and then because of this non-terminal T we add these two items. We have now exhausted all the items in state 0, but we have added four new states, 1, 2, 3, and 4, and we need to apply the goto operation on these states as well. This gives us S going to E hash dot, which is state 5, and it cannot grow further. This gives us E going to E plus dot T, and the closure operation adds these two items because of the T after the dot. Similarly, advancing the dot here gives E going to E minus dot T; that is this state, and two more items are added by closure, again because of the T after the dot. That exhausts this state. This item cannot give us any more, nor can this one, nor this one, but this one really can: the dot is advanced past the E.
So, that gives us T going to parenthesis E dot parenthesis, and these two items are also added to the same state, because the non-terminal E appears immediately after the dot in them as well; but we cannot grow this state further. Then E going to T dot is already an available state, T going to parenthesis dot E parenthesis leads back to this very state, and T going to id dot is already present, so this state is exhausted. This one does not give us anything extra, but this one can: it gives us E going to E plus T dot, which is state 9. These two do not generate any new states; they lead to this state and this state respectively. This one gives us a new state, E going to E minus T dot; these two cannot, and this is already a state, one which cannot grow further. And this particular state gives us only one extra state, T going to parenthesis E parenthesis dot; these two do not generate anything extra, they actually lead to states 6 and 7 respectively. So these are all the sets of items generated by the closure and goto operations, and the shift and reduce actions of the parser are derived from this set of items. We will look at the method of filling the parsing table using the sets of items in the next lecture. Thank you.
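As a cross-check on the walkthrough, the sketch below (my own illustration, same item encoding as before) builds every state with a worklist and confirms what the lecture arrives at: twelve states in all, six of which contain a completed item with the dot at the far right, and these are exactly the states where reduce actions will go:

```python
# Worklist construction of all LR(0) states for the lecture's grammar.
GRAMMAR = {
    "S": [("E", "#")],
    "E": [("E", "+", "T"), ("E", "-", "T"), ("T",)],
    "T": [("id",), ("(", "E", ")")],
}
NONTERMINALS = set(GRAMMAR)

def closure(items):
    result, work = set(items), list(items)
    while work:
        lhs, rhs, dot = work.pop()
        if dot < len(rhs) and rhs[dot] in NONTERMINALS:
            for prod in GRAMMAR[rhs[dot]]:
                item = (rhs[dot], prod, 0)
                if item not in result:
                    result.add(item)
                    work.append(item)
    return frozenset(result)

def goto(items, symbol):
    return closure({(l, r, d + 1) for (l, r, d) in items
                    if d < len(r) and r[d] == symbol})

def lr0_states():
    """Explore new states as they are discovered instead of rescanning
    the whole collection on every pass."""
    start = closure({("S", ("E", "#"), 0)})
    states, work = {start}, [start]
    while work:
        state = work.pop()
        for x in {rhs[dot] for (_, rhs, dot) in state if dot < len(rhs)}:
            g = goto(state, x)
            if g and g not in states:
                states.add(g)
                work.append(g)
    return states
```

Running this reproduces the lecture's states, including the singleton state containing only E going to E plus T dot (state 9) and the one containing only T going to parenthesis E parenthesis dot.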