Welcome to part 2 of the lecture on global register allocation. In part 1, I told you about the issues in global register allocation — which registers to use, where to place which variables, and so on — and I also described the problem as such. So let us begin with the problem definition. Global register allocation assumes that allocation is done beyond basic blocks, usually at the function level. The implication is that we are not limited to basic blocks when doing register allocation; we can also work at a higher level, namely a function or perhaps a group of loops. This sort of global register allocation is much better than local register allocation, because it saves a lot of stores and loads at the boundaries of the basic blocks. There is a very important decision problem related to register allocation, stated on the slide: given a number k, and given the program represented as a control flow graph, is there an assignment of registers to the program variables such that no conflicting variables are assigned the same register? If two variables carry values at the same time in the program, they are said to be conflicting variables — I will give a proper definition of this a little later — and conflicting variables cannot be assigned the same register. In addition, we should not introduce unnecessary loads and stores, and we must use at most k registers for the program. This problem was shown to be NP-complete way back in 1970 by Ravi Sethi.
Therefore, solving the problem heuristically is the only way ahead. Graph coloring is one of the most important and popular heuristics, and it is the one we are going to discuss as well. There are also simpler algorithms, and we will look at one of them, for loops. So let us see what conflicting variables are. Two variables interfere, or conflict, if their live ranges intersect. That brings us to another piece of terminology, the live range. A variable is live at a point p of the flow graph if there is a use of that variable on some path from p to the end of the flow graph — that is what the definition says. Let us look at an example to understand it. In this flow graph, the variable a is defined here and the variable b here; there is a use of a here and a use of b here. So at this point we can definitely say yes, there is a use of the variable a, and similarly at this point there is a use of the variable b as well. The live range of a starts from its definition: at B2 and at B4 we can answer yes to the question "is there a use of a?", so these blocks constitute the live range of a. Similarly, B3, B4 and B6 constitute the live range of b. Strictly speaking, we actually have to look at program points — typically the instruction number within a basic block, together with the basic block number. The live range of a variable is then the smallest set of program points at which it is live. In this example, these points form the live range of a and these the live range of b. Now let us look at a simple algorithm for global register allocation; the region here is not a complete function — we will look at loops.
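As a tiny illustration of the intersection test, here is a sketch in Python. The representation of a live range as a set of (block, instruction) program points, and the variables and points themselves, are hypothetical illustrations, loosely echoing the slide's example rather than taken from it.

```python
# A live range is modeled as a set of program points; a program point is a
# (basic_block, instruction_index) pair. Two variables conflict iff their
# live ranges intersect. The names and points below are made up.

def conflict(live_range_1, live_range_2):
    """Two live ranges conflict if they share at least one program point."""
    return bool(live_range_1 & live_range_2)

live_range = {
    "a": {("B2", 1), ("B4", 0)},                # a is live in B2 and B4
    "b": {("B3", 0), ("B4", 0), ("B6", 2)},     # b is live in B3, B4 and B6
    "c": {("B5", 1)},                           # c is live only in B5
}

print(conflict(live_range["a"], live_range["b"]))  # True: both live at B4
print(conflict(live_range["a"], live_range["c"]))  # False: disjoint
```

Conflicting variables, like a and b here, can never share a register; disjoint ones, like a and c, can.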
To begin with, we will see how to do allocation for single loops, and then apply the same algorithm to nested loops as well. This algorithm can be used to allocate registers to variables used within loops, and it requires information about the liveness of variables at the entry and exit of each basic block. Why do we require this information? The loop is somewhere in the middle of the program, so there may be variables that are live on entry to it, and this can be brought down to the level of the basic block. If a variable is live at the beginning of a basic block, it is used within that block, and if it is live at the exit of the block, it is used beyond it. So if a variable is live at the entry of a basic block, we must load it into a register at the entry, and if it is live at the exit, we must store it back to memory at the end of the block. To make sure the costs of these operations are computed properly, we require this live-in/live-out information. And why are we not bothered about the variable through the rest of the block? Once it has been computed into a register, it obviously stays in that register until the end of the basic block. So we are only interested in the uses of the variable before its first computation; after that it stays in the register anyway, and no extra saving is possible. Also, load and store instructions cost two units each, because they occupy two words; this assumption goes into computing the savings, known as the usage count. So let us understand how to compute the usage count. There are two components here.
The first component says: for every use of a variable v in a basic block before v is first defined there — as I said, once it is defined we are not bothered, because it stays in the register — if v is assigned a register, we save something: savings(v) = savings(v) + 1. After v is defined it stays in the register, so there is no extra saving possible. That is one part of the usage count. Then there is the second component: for every variable v computed in a basic block, if v is live on exit from the block, count a savings of 2, since it is then not necessary to store v at the end of the block. The point is, if we assign a register to this variable which is live on exit, it stays in that register and we really do not have to store it; otherwise we would have had to store it at the end of the basic block. With these two components, the total savings is

    savings(v) = sum over all basic blocks B in the loop of [ use(v, B) + 2 * live_and_computed(v, B) ]

where live_and_computed(v, B) is 1 if v is computed in B and live on exit from B, and 0 otherwise. So the term 2 * live_and_computed corresponds to the second component — the variable v computed in a basic block and live on exit — whereas use(v, B), the number of uses of v in B before its first definition, corresponds to the first. There is also a minor factor which we ignore: at the points of entry and exit of the loop, we have to load or store a live variable, requiring 2 units for the load at the entry and 2 units for the store at the exit. But these are one-time costs, because they are incurred only at the entry and exit of the loop — we are not talking about the entry and exit of the basic blocks; that has already been taken care of here.
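The formula can be sketched directly in Python. The per-block summaries below — uses before the first definition, the set of variables computed, and the live-on-exit set — are an assumption about how the analysis results are represented; the numbers are chosen to mirror the counts for the variable a worked out in the example that follows.

```python
# Sketch of the usage-count computation. Each basic block of the loop is
# summarized by: the number of uses of v before its first definition in
# the block, the set of variables computed in the block, and the set of
# variables live on exit from the block.

def savings(v, blocks):
    total = 0
    for b in blocks:
        # First component: one unit saved per use of v before its definition.
        total += b["uses_before_def"].get(v, 0)
        # Second component: 2 units saved if v is computed and live on exit.
        if v in b["computed"] and v in b["live_on_exit"]:
            total += 2
    return total

# Hypothetical summaries mirroring variable 'a' in the lecture's example:
# B1 defines a (live on exit), B2 and B3 each use a once, B4 neither.
blocks = [
    {"uses_before_def": {},         "computed": {"a"}, "live_on_exit": {"a"}},  # B1
    {"uses_before_def": {"a": 1},   "computed": set(), "live_on_exit": set()},  # B2
    {"uses_before_def": {"a": 1},   "computed": set(), "live_on_exit": set()},  # B3
    {"uses_before_def": {},         "computed": set(), "live_on_exit": set()},  # B4
]

print(savings("a", blocks))  # (0+2) + 1 + 1 + 0 = 4
```

The variables with the highest savings are the ones given registers.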
Once we compute the total savings, the variables whose savings are the highest will reside in registers. Let us look at an example — a very simple loop. The variables b, c and f are live on entry to the loop, and along its paths the variables a, c, d and f are used. The liveness at each point is listed on the slide: a, c, d and f are live at this point, c, d and f are live at this point, b, c and f are live here, and on exit from the loop a, b, c, d and f are live. It is easy to check why the liveness is like this: a, c, d and f are all being used here, which is why all four are live. So let us compute the usage count; it has been listed for each of the variables, with the cost broken down per basic block. Take the variable a. For the basic block B1 the cost is 0 + 2. Why? The first component corresponds to uses before the definition, and a is defined directly in the first statement itself, so there is no use before a definition and that component is 0. But a is indeed computed in the block, and it is also live on exit from the block — remember, for every variable computed in a block and live on exit we count 2, and a is indeed live along both paths — so we have 2 × 1. So this is 2. In the basic block B2 we have a use of a before any definition of a, which costs 1, and there is no definition of a, so the second component is 0. Similarly, we have a use of a in B3, which costs 1, and the second part is 0 because there is no definition of a here. In B4 there is neither a use nor a computation of a, so both components are 0. The total cost is therefore 4. Similarly, for b we have 3 uses of the variable b in B1 before any definition of b — in fact, there is no definition of b there at all.
So the first cost is 3 and the second cost is 0. In B2 we have no use of b before its definition, and b is not live on exit from the block, so both components are 0. The same is true for B3: there is no use of b and b is not live, so these are two 0s. In B4 the first component is 0, because there are no uses of b before its definition; but b is computed in the block and it is live on exit, so that component is really 2, i.e., 2 × 1. The total cost for b is therefore 5. Similarly, we can compute the costs of c, d, e and f as well. Now 5 goes undisputed, so b will be one of the variables given a register. Then there are three contenders — a, d and f — and two of them can be given registers. We have arbitrarily picked a and d; it really does not matter which, it could have been a and f as well. Once we assign registers to these variables, the code will also become different when we generate the machine code: we must refer to the registers corresponding to a, b and d. And we must take care of the boundaries: a is not live on entry to the loop, but b is, so b will have to be loaded on entry, and a, b and d will have to be put back into memory using store instructions at the exit of the loop. These are all things that have to be done for the registers corresponding to the three variables. So what happens if we have nested loops? Let me explain using this example. Here is the small loop L2, and it is embedded within the loop L1; there are some basic blocks here and here as well. How do we allocate registers for nested loops? The procedure is to assign registers for the inner loops first and then consider the outer loop. So let L2 be nested inside the loop L1.
That is what this means. The rule is: for the variables assigned registers in L2 but not in L1, load these variables on entry to L2 and store them on exit from L2. Let us see what this means. Say the variables x, y and z were assigned registers when we did the allocation for L2, but when we did the allocation for L1 they did not get any registers — they have registers only here, not in these two places. What we need to do is very obvious: we load x, y and z on entry to the loop L2, and we store x, y and z on exit from the loop. Once this is accomplished, those costs are taken care of and that allocation works properly. Case 2: a, b and c are assigned registers in L1, but they are not assigned any registers in L2. In such a case, when we enter the loop L2, the registers corresponding to a, b and c will be given to some other variables, so we need to store their values to memory on entry, and when we exit L2 we need to load the values from memory back into the registers corresponding to a, b and c. If this is done, the loop works properly. Case 3: the variables p and q are assigned registers in both L1 and L2. Obviously, no special action is required at all; they just continue from one loop to the other. So that is the usage-count based algorithm for allocating registers to variables. Now let us look at a very fast register allocation scheme called linear scan register allocation — the reason it is called linear scan will become clear very soon. It is due to Poletto and Sarkar, and it was published in 1999. It uses the notion of a live interval rather than a live range; a live interval is an approximation of a live range.
In other words, live ranges are subsets of live intervals. This scheme is relevant for applications where compile time is important — for example, in dynamic compilation and just-in-time compilation, the compilation time is added to the run time, because all the compilation happens on the fly. In such cases, using a very expensive register allocator has a bad effect on performance, so we must use simple register allocators. This linear scan register allocator is something that can be used in dynamic and just-in-time compilers, whereas register allocation schemes based on graph coloring are very slow and cannot be used there. So let us begin with the definition of a live interval. Assume there is some numbering of the instructions in the intermediate form. An interval [i, j] is a live interval for a variable v if there is no instruction with number j' greater than j such that v is live at j', and no instruction with number i' less than i such that v is live at i'. Let me show you an example. Here are i and j, and this is the sequence of instructions over which we are considering the variable v to be live. If we take an instruction i' before i, v is not live there — such an i' where v is live does not exist. Similarly, j is the last instruction where v is live — an instruction j' after j where v is live does not exist. If these two conditions are satisfied, we say that [i, j] is the live interval for the variable v. This is a conservative approximation of the live range: there may be sub-ranges of [i, j] where v is not live, but these are ignored.
In other words, if the sequence from i to j is long, it is possible that the variable is defined several times within this range, but all such live ranges of the same variable are merged into the single live interval from i to j. Here is an example to show you that. There is a definition of a here, another definition of a here, a use of a here, and a use of b here. If you look at the textual order of the instructions, the definition of a comes first, then the definition of b, which is in this basic block, then the condition corresponding to this block, then the use of a, and then the use of b. The instructions are numbered in this order, and we take the interval from the instruction number of the assignment to a up to the instruction number where a is used last. So this entire range of instructions, which includes the basic block in between, is considered the live interval of a — even though a is not defined or used in that middle block, so a is not live there, since we go by textual placement, everything from the definition of a to the last use of a is the live interval of a. Given an ordering of the instructions and live-variable information, live intervals can be computed very easily using just one pass over the intermediate representation. Let me tell you how. We scan the instructions one by one; when we hit a definition of some variable v, we keep scanning for further definitions and uses of v, and once we know the last use of v has been passed, the entire range i to j is taken as the live interval of v. Interference among live intervals is assumed if they overlap — the live intervals are nothing but intervals of integers.
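The one-pass computation just described might look like the following sketch. The representation of the liveness information as a list of sets, one per instruction, is an assumption, and the instruction numbering and variables are hypothetical.

```python
# One-pass computation of live intervals from per-instruction liveness:
# for each variable, the interval runs from the first instruction where
# it is live to the last instruction where it is live.

def live_intervals(liveness):
    """liveness: list indexed by instruction number; each entry is the set
    of variables live at that instruction. Returns {var: (start, end)}."""
    intervals = {}
    for n, live_vars in enumerate(liveness):
        for v in live_vars:
            if v not in intervals:
                intervals[v] = (n, n)        # first point where v is live
            else:
                start, _ = intervals[v]
                intervals[v] = (start, n)    # extend to the latest point
    return intervals

# Hypothetical liveness: a is live at 0, 1 and 3 but NOT at 2.
liveness = [{"a"}, {"a", "b"}, {"b"}, {"a", "b"}, {"b"}]
print(live_intervals(liveness))  # {'a': (0, 3), 'b': (1, 4)}
```

Note that a's interval (0, 3) includes instruction 2, where a is not live — exactly the conservative approximation of the live range discussed above.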
So if two intervals overlap, they have some common range between them — that is the basic idea. Now, the number of overlapping intervals changes only at the start and end points of an interval. This becomes very clear with an example, so let me show you that example before we continue with the algorithm. There are several live intervals here: i1, i2, i3, i4, i5, i6, i7, i8, i9, i10 and i11, each meaning that some variable is live throughout that interval. Please observe that we are going to make register-allocation decisions only at the start and end points of these intervals: we consider some information at this point, and the next point at which we consider it is here. At these points we check whether some intervals have expired, and so on. That is why the number of overlapping intervals changes only at the start and end points of an interval. To exploit this, the live intervals are stored in sorted order of increasing start point. Here the start point of i1 comes first, then i5, then i8, then i2, then i9, then i6, then i3, then i10, i7, i4 and finally i11 — that is the sorted order of the intervals. At each point in the program, the algorithm also maintains what is known as an active list: the list of live intervals that overlap the current point and — very important — have been placed in registers. The intervals which are active but have not been placed in registers are the ones that have been assigned to memory locations; they do not occur in this list, and we will see how this happens. The active list is kept sorted in order of increasing end point.
Remember, the live interval list is stored sorted by start point, and the active list is stored sorted by increasing end point. Here are the active lists at various points a, b, c and d. At point a, i1 is the only interval that is active, and let us say it has been given a register. At point b, i5 and i1 are both overlapping, and assuming both have been given registers, both are in the active list at b. At point c, i1 has finished, so it is no longer in the active list, but i5 and i8 are — again assuming they were given registers. At point d we have i7, which has not finished yet, i4, which has not finished yet, and i11. The list is kept sorted by end point: this one finishes first, then this one, then this one. If, for example, i7 had never been given a register, it would not be in the list even though it overlaps i4 and i11 — it would have been assigned to memory, and the active list at this point would hypothetically contain only i4 and i11. So how does the algorithm work? Let me show you on the example, and then we will read through the algorithm in detail. We have the list of intervals sorted by start point. We take the first item on the list, which is i1, and assume we have 3 registers. We give one register to i1, so we are left with 2 more. The next interval on the list is i5. At this point we check whether i1 has finished; it has not. We check the number of free registers — 2 remain — so we can give one to i5 as well. Now i1 and i5 are both in the active list.
Let us go to point c; i8 is the next interval on the list, and its start point is c. At this point we can check from the active list that i1, which is present there and has been given a register, has completed — it is not active anymore. So it can be removed from the active list and its register returned to the free pool. We remove i1, while i5 is still active, and we add i8 to the list — we can, because we have 2 free registers and can give one to i8. Next we take up i2. At this point neither i5 nor i8 has finished, and we have 3 registers, the third one being free; we give it to i2, and these 3 are on the active list at this point. After i2 we consider i9, and we find that both i5 and i8 have finished. So i2 and i9 are on the active list, and of course they can be given registers. After i9 we pick up i6; at this point i2 and i9 are both still active, and i6 can be added to the list and given the register that is still free. When we go to i3, i2 has finished, but i6 and i9 are still active, and the third register is free for i3. Then we go to i10: i9 has finished, but i3 and i6 are active, and i10 can be given the register freed by i9. After i10 we go to i7: here i6 has finished, but i3 and i10 have not, and the register freed by i6 can be given to i7. Then we go to i4: i3 has finished, i10 has finished, and only i7 is active, so 2 registers are free and one can be given to i4. And at i11 we need all 3 registers, because i7 and i4 have not yet finished. This is the way register allocation proceeds for this simple example. Now let us look at another example in which there is some shortage of registers.
We have a, b, c, d and e as 5 live intervals, and let us assume that 2 registers are available. We begin with a; this is its start point, point number 1. We can give a register to a — absolutely no problem with that. Then comes the live interval of b; at this point a is still active, and we have one more free register, so we give it to b — no problem so far. Now a and b are on the active list, and we come to c. a and b are still live — they overlap c and have not finished — and the 2 registers have already been given to a and b. The question to be asked is: should we take away a register from either a or b and give it to c, or should we just make c go to memory? That is called spilling. In this case the decision is made by looking at the end points of the 3 intervals which are active at this point. c has an end point much further away than that of a or b, so the heuristic says: spill c and put it in memory. The reason behind this heuristic is very simple — c lasts a long time, and if we put it in memory, a and b may free their registers in the meantime, and we will probably be able to give registers to more variables. That is the hope, and therefore longer live intervals are the ones assigned to memory by the spilling operation. So c is put into memory, and then at point 4, a expires — a has finished, which we can make sure of by looking at its end point; it has no overlap with d. b is still active, and c is not in the picture because it has been assigned to memory. So a has released a register, and that register can be given to d.
Then we go to the starting point of e. At this point b has finished, so its register can be given to e; d has not yet finished. So in this example we have spilled c to memory. What would have happened if the duration of c were much smaller? Points 1 and 2 proceed as before, but at the point where the live interval of c begins, we find that the live interval of b extends beyond that of either a or c. So the candidate to be spilled is b, not a or c: we take away the register that was given to b and give it to c, while a retains its register — we had 2 registers, so this is fine. Spill b, since the end point of b is greater than the end point of c, and give the register to c. From here on everything is straightforward: at the beginning of d, a frees its register, which can be given to d, and at the beginning of e, c has finished and its register can be given to e. So that is the algorithm; now let us look at its formal description to understand how it goes. To begin with, the active list is made empty, and for each live interval i, in order of increasing start point, we execute the following code. First, expire old intervals: all the intervals which have completed are now thrown away — I will give you the details of this very soon. After that, whatever remains in the active list is exactly the set of intervals which have not expired, all of which have been given registers. If the length of the active list equals R, the number of registers, then we must call SpillAtInterval(i); this may decide to move a variable which currently has a register into memory, or it may put the new interval itself into memory. We will see the details of this soon as well. Otherwise a register is free — that is, the length of the active list is not equal to R.
That is, it is less than R, so at least one register is free. We can then assign a free register to the live interval i and remove that register from the pool of free registers. We add i to the active list and sort the list again by increasing end point, so that we are ready for the next iteration. We still need to see the details of the two functions, ExpireOldIntervals and SpillAtInterval. How do we expire old intervals? We inspect every interval j in the active list, in order of increasing end point — remember, the list is already sorted in that order; i is the new interval and j ranges over the older ones in the active list. If the end point of j is greater than or equal to the start point of i, then j is still active, so we do nothing and go on. If the end point of j is less than the start point of i, then j has completed: remove j from the active list and add register(j) to the pool of free registers. This is done for all the intervals in the active list — whichever has retired is removed from the list, and its register is added to the pool of free registers. Then the function SpillAtInterval: again, i is the new interval, and we must decide whether i should be given a register or be put into memory. Let spill be the last interval in the active list — the last-ending interval. If the end point of spill is greater than the end point of i — in other words, the last interval in the active list ends later than the new interval i — then we take away the register that was given to the interval spill and give it to the interval i: register(i) = register(spill) is the taking-away operation, and then the location of spill becomes a new stack location.
In other words, the interval spill is now banished to memory: a new location is created on the activation record and given to spill. These locations are offsets on the activation record, assigned to the spilled intervals — that is, to the variables. We then remove the interval spill from the active list; the variable corresponding to spill will from now on always be in memory. The new interval is added to the active list — we have already given it a register — and the active list is adjusted to remain sorted by increasing end point. Now suppose, on the other hand, that no interval in the active list has an end point greater than that of i — that is, i itself ends later than every interval in the active list. Then the location of i becomes a new stack location: we banish i itself to memory. That is the way SpillAtInterval works, and these are exactly the details of what happened in our examples: in the second example we took away the register that had been given to b and banished b to memory, while c was given the register; whereas in the first example the incoming interval c had an end point greater than those of the two active intervals, so c was sent to memory while they retained their registers. So that is the linear scan register allocation algorithm. Let us look at the complexity of the linear scan algorithm. As I said, the complexity is very important, the reason being that it is used in dynamic and just-in-time compilers. Suppose V is the number of live intervals and R is the number of available physical registers, and suppose we use a balanced binary tree for storing the active intervals. Then every access — insertion, deletion or search — can be done in time log R, and we have V live intervals to manage.
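Before we count the cost, the whole procedure described above — the main loop, ExpireOldIntervals and SpillAtInterval — can be collected into a short Python sketch. This is a minimal sketch, not the authors' implementation; the interval endpoints in the demo are hypothetical, chosen to reproduce the shortage example in which c is spilled.

```python
# Sketch of linear scan register allocation. Intervals are (name, start,
# end) triples; R is the number of physical registers, numbered 0..R-1.
# Spilled intervals get a fresh stack slot instead of a register.

def linear_scan(intervals, R):
    intervals = sorted(intervals, key=lambda iv: iv[1])  # by start point
    free = list(range(R))      # pool of free registers
    active = []                # register-holding intervals, sorted by end point
    register, location = {}, {}
    next_slot = 0

    for name, start, end in intervals:
        # ExpireOldIntervals: retire every active interval ending before
        # the new interval starts, returning its register to the pool.
        for j in list(active):
            if j[2] >= start:
                break                      # active is sorted by end point
            active.remove(j)
            free.append(register[j[0]])
        if not free:
            # SpillAtInterval: spill whichever of the new interval and the
            # last-ending active interval ends later.
            spill = active[-1]
            if spill[2] > end:
                register[name] = register[spill[0]]   # take over its register
                del register[spill[0]]
                location[spill[0]] = next_slot; next_slot += 1
                active.remove(spill)
                active.append((name, start, end))
                active.sort(key=lambda iv: iv[2])
            else:
                location[name] = next_slot; next_slot += 1  # spill i itself
        else:
            register[name] = free.pop()
            active.append((name, start, end))
            active.sort(key=lambda iv: iv[2])
    return register, location

# Hypothetical endpoints echoing the shortage example: at c's start, c ends
# latest among the three overlapping intervals, so c is the one spilled.
demo = [("a", 1, 4), ("b", 2, 6), ("c", 3, 10), ("d", 5, 8), ("e", 7, 9)]
regs, spills = linear_scan(demo, 2)
print(sorted(spills))  # ['c']: only c goes to memory
print(sorted(regs))    # ['a', 'b', 'd', 'e'] all get registers
```

Shrinking c's end point below b's would instead make b the spill victim, matching the second scenario above.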
So the time complexity will be V × log R. Remember that the active list can be at most R long, and that is why the factor is log R; insertion and deletion — removing something from the active list, adding something to it — are the important operations. Empirical results reported in the literature, typically in the Poletto and Sarkar paper, indicate that linear scan is much faster than graph coloring algorithms. Of course, there is always a price to pay for a fast algorithm: the machine code which is emitted is a bit slower — in the worst case, about 10 percent slower than the code generated by an aggressive graph-coloring-based allocator. So graph coloring is better in that it can enable more efficient code generation, but we pay for that efficiency with the speed of the allocator, and slow allocators cannot be used in dynamic and just-in-time compilers. So now we move on to the next class of algorithms for register allocation: the graph-coloring-based algorithms. Way back in the 80s, Chaitin from IBM, along with a few others, for the first time proposed that graph coloring could be used to solve the register allocation problem quite well. So what is the association between the graph coloring formulation and the program? We use a data structure called the interference graph — I am going to give you details of it very soon. As a first-cut approximation: the live ranges that we used in the usage-count-based algorithm, and the live intervals that we used in our linear scan algorithm, become the nodes of the interference graph. There are also entities called webs which can serve as nodes, but we are not getting into the details of web-based register allocation in this lecture.
So now, the nodes of the interference graph correspond to live ranges. Whenever two live ranges are active at the same point in the program, they are said to interfere. So an edge connects two live ranges that interfere, or conflict, with one another; the notion of conflict is very similar to that of live intervals, in that if the ranges overlap then they conflict. Usually we require two data structures to represent such an interference graph: an adjacency matrix and adjacency lists. The reason is that sometimes we want to enumerate the neighbors of a node, and searching the adjacency matrix for all the neighbors is inefficient, whereas with an adjacency list, finding the neighbors is a very efficient operation, we just walk the list. Conversely, other operations, such as checking whether two particular nodes interfere, are very efficient on the adjacency matrix but may be expensive on the adjacency list. So both representations are used by the algorithm, and the overhead is maintaining both data structures instead of just one. The basic idea is this: the nodes of the graph correspond to live ranges, and we assign colors to the nodes such that two nodes connected by an edge are not assigned the same color. A color corresponds to a register, the number of colors available equals the number of registers available on the machine, and a k-coloring of the interference graph is mapped onto an allocation with k registers. Assuming that we have k registers, we try to use k colors. The difficulty is that if the interference graph cannot be colored with k colors, then with k registers we cannot do the register allocation for the program without spilling.
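The dual representation just described can be sketched as follows. This is a minimal illustration, assuming live ranges are identified by integer ids and represented as (start, end) intervals; the class and function names are my own, not from the lecture.

```python
class InterferenceGraph:
    def __init__(self, n_ranges):
        self.n = n_ranges
        # adjacency matrix: O(1) "do u and v interfere?" queries
        self.matrix = [[False] * n_ranges for _ in range(n_ranges)]
        # adjacency lists: efficient enumeration of a node's neighbors
        self.adj = [[] for _ in range(n_ranges)]

    def add_edge(self, u, v):
        if u != v and not self.matrix[u][v]:
            self.matrix[u][v] = self.matrix[v][u] = True
            self.adj[u].append(v)
            self.adj[v].append(u)

    def interferes(self, u, v):
        return self.matrix[u][v]           # constant time via the matrix

    def degree(self, u):
        return len(self.adj[u])            # constant time via the list

def build_graph(ranges):
    """ranges: list of (start, end) live ranges; overlapping ranges conflict."""
    g = InterferenceGraph(len(ranges))
    for u in range(len(ranges)):
        for v in range(u + 1, len(ranges)):
            (s1, e1), (s2, e2) = ranges[u], ranges[v]
            if s1 <= e2 and s2 <= e1:      # the two ranges overlap
                g.add_edge(u, v)
    return g
```

Maintaining both structures costs extra space and bookkeeping, but each query then runs on the representation that suits it best.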
So in such a case we will have to reduce the number of nodes in the graph; we may have to remove some of the nodes by what is known as a spilling operation. We perform the spilling operation on some nodes of the graph, reduce the size of the graph, and then continue with the allocation. Let me give you a very simple example. Here is an interference graph which is said to be two-colorable: we have two colors available, green and violet, where green corresponds to one register and violet corresponds to another register. In this other case the graph is said to be three-colorable, so we have three colors corresponding to three registers of the machine. We have assigned a color here and the same color here, but it so happens that any two neighbors have two different colors. That means if the live ranges of two nodes are connected by an edge, the two variables are active at the same time, but since they are in two different registers there is no problem in the program; the same is valid here as well. The basic idea behind Chaitin's algorithm is to choose an arbitrary node of degree less than k, push it on a stack, and then remove that vertex and all its edges from the graph. This is the reduction of the graph: it may decrease the degree of some other nodes and cause more nodes to have degree less than k. If you look at this graph, suppose we have not assigned any colors yet: we take this node, remove it along with the two edges connected to it, and the graph which is left is only this part; then we continue the operation on this remaining part of the graph. Whereas if we try it here, removing this node makes these two edges go away, and what remains is this graph.
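The simplify step of Chaitin's algorithm, as just described, can be sketched like this. This is only an illustration, assuming the graph is given as a mutable dictionary from node to neighbor set; the function name `simplify` is my own.

```python
def simplify(adj, k):
    """adj: dict node -> set of neighbors (mutated in place).
    Repeatedly remove an arbitrary node of degree < k, pushing it on a
    stack together with its remaining neighbors. Returns the stack, or
    None if every remaining node has degree >= k (a spill is needed)."""
    stack = []
    while adj:
        removable = [n for n, nbrs in adj.items() if len(nbrs) < k]
        if not removable:
            return None                     # stuck: some node must be spilled
        n = removable[0]                    # arbitrary choice of such a node
        stack.append((n, adj.pop(n)))       # remember n and its neighbors
        for m in adj:                       # removing n lowers neighbors' degrees,
            adj[m].discard(n)               # possibly enabling further removals
    return stack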
So at some point, if all the remaining vertices have degree greater than or equal to k, some node has to be spilled, because we cannot continue the reduction operation which we just described. If no vertex needs to be spilled, and that is the best case, then we successively pop vertices off the stack and color each one with a color not used by its neighbors; reuse of colors is definitely possible. For example, say there are two registers. We remove this node and are left with this graph; then we remove this node and are left with this graph; we remove this node and are left with a single-node graph; and we remove that as well, so the graph is empty. Then, in the reverse order, we assign the available colors: we give a color to this node; obviously this one then gets a different color; this one is added next and gets a color different from its neighbors; and finally this one is added and gets a color different from its own neighbors, possibly reusing an earlier color. This is the way Chaitin's algorithm proceeds. I will stop here now and consider the details of Chaitin's algorithm in the next part of the lecture. Thank you.
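The select phase, popping vertices and coloring them, can be sketched as follows. This is an illustration under the assumption that the simplify phase succeeded with no spills; the stack holds (node, neighbors) pairs recording each node's neighbors at the time it was removed, and the function name `select` is my own.

```python
def select(stack, k):
    """stack: list of (node, neighbors) pairs pushed during simplify.
    Pops in reverse removal order; each node gets the lowest color
    (register number 0..k-1) not used by its already-colored neighbors."""
    color = {}
    while stack:
        n, nbrs = stack.pop()
        used = {color[m] for m in nbrs if m in color}
        free = [c for c in range(k) if c not in used]
        color[n] = free[0]                  # exists because n had degree < k
    return color                            # reuse of colors is possible
```

Because every pushed node had fewer than k still-present neighbors when it was removed, a free color is always available when it is popped; that is why the simplify criterion of degree less than k guarantees the coloring succeeds.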