We are asking the question: can a multi-tape Turing machine recognize a language which no single-tape Turing machine can recognize? We would like to show that the answer is no. What this means is that if a language is recognized by a multi-tape Turing machine, the same language can also be recognized by a single-tape Turing machine, the basic kind we first defined. We will illustrate the proof of this claim by means of an example in which the machine has two tapes; the same idea carries over to multi-tape machines with more than two tapes. Recall that a move of such a machine is described not by a quintuple, a tuple of five elements, but by a tuple of eight elements. Why? Because given a present state and the two symbols under the two heads, the move must specify the next state, the two symbols written, and the two head movements. In this example, suppose the present state of the Turing machine is q, the symbol on tape one is a, and the symbol on tape two is b. If such a situation occurs, then the next state, let us say, is p, the symbol written on tape one is a', and the symbol written on tape two is b'. Now, since there are two read-write heads, their movements need not be synchronized: one can move left and the other right, or both can move left, or both can move right. For this example, let us say that in this particular move the tape-one head moves to the left and the tape-two head moves to the right. So at the next time instant, a' will have been written in place of a, b' in place of b, the tape-one head will have moved one cell to the left, and the tape-two head will have moved one cell to the right.
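The eight-element move just described can be sketched as a lookup table keyed on the present state and the two scanned symbols. This is only a notational sketch; the symbol names below simply echo the lecture's example.

```python
# A move of a two-tape Turing machine has eight components:
# (q, a, b, p, a', b', dir1, dir2). For simulation it is convenient to
# key a table on the first three and store the remaining five.
delta = {("q", "a", "b"): ("p", "a'", "b'", "L", "R")}

def next_action(state, sym1, sym2):
    """Look up the machine's response to (present state, tape-1 symbol, tape-2 symbol)."""
    return delta.get((state, sym1, sym2))  # None means no move: the machine halts

print(next_action("q", "a", "b"))  # ('p', "a'", "b'", 'L', 'R')
```

Note the two direction components are independent, matching the remark that the two head movements need not be synchronized.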
So at the next time instant you will find the two-tape Turing machine in this new position; of course, other such tuples are given, and with them the machine works from one step to the next. Now, how can we do what this machine does by means of a single-tape Turing machine? There are several questions to settle, the first being: how shall we represent the contents of the two tapes on a single tape? There are various ways of doing it, but what is convenient, at least for this lecture, is to consider a single tape with four tracks. So let us be clear about what we are doing. We have a two-tape Turing machine M which is doing something, let us say recognizing a language, and we would like to recognize the same language by means of a single-tape Turing machine; call that machine T. This single-tape machine T will have four tracks, and their significance is this: the first track will hold the contents of tape one of the two-tape machine M, and the third track will hold the contents of tape two, say the symbols d, b', c, c. The tracks need not be aligned with each other, and indeed there is really no question of alignment between the two tapes of M, so we simply write the contents of tape one in the first track and the contents of tape two in the third. But we also need to represent the positions of the two heads. In this example situation, the head of the first tape is scanning the symbol b.
This particular fact we can represent as follows: the second track will be blank except for a 1 in exactly one cell, marking that the cell right above it, in track one, is the cell the tape-one head is scanning. Similarly for track four: a 1 in track four marks the cell in track three, here a particular c, that the tape-two head is scanning; all the other cells of tracks two and four are blank. What we are going to do is this: one move of the machine M will be carried out by a number of moves of the machine T. T is a single-tape Turing machine with four tracks, and you can imagine that it carries out a number of steps to simulate one step of M. One step of M, of course, means that the two symbols under the two heads are updated, the state is changed, and the two heads move in the appropriate directions. This is what we would like to carry out, and I will now try to give you the overall idea. Imagine the beginning of the simulation of one step of M by the single-tape machine T. Remember that T is a single-tape Turing machine, so it has only one head. At that moment we can assume that this head is scanning the cell which contains the left one of the two symbols under M's two heads. The situation is like this: on tape one this b is being scanned, and on tape two this c is being scanned; and in our representation, the b that is being scanned lies to the right of this c.
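The four-track representation can be sketched concretely. This is a minimal sketch: one cell of the single tape is a 4-tuple (tape-1 symbol, head-1 marker, tape-2 symbol, head-2 marker), with `BLANK` and the padding convention being assumptions of the sketch.

```python
BLANK = "_"

def make_tracks(tape1, head1, tape2, head2):
    """Pack two tapes and their head positions into one four-track tape."""
    width = max(len(tape1), len(tape2))
    t1 = list(tape1) + [BLANK] * (width - len(tape1))
    t2 = list(tape2) + [BLANK] * (width - len(tape2))
    # tracks 2 and 4 are blank except for a single 1 marking each head
    m1 = ["1" if i == head1 else BLANK for i in range(width)]
    m2 = ["1" if i == head2 else BLANK for i in range(width)]
    # one cell = (track 1, track 2, track 3, track 4)
    return list(zip(t1, m1, t2, m2))

tape = make_tracks("abc", 1, "dbc", 2)
print(tape[1])  # ('b', '1', 'b', '_')  -- tape-1 head is here
print(tape[2])  # ('c', '_', 'c', '1')  -- tape-2 head is here
```

A real single-tape machine has these four symbols available simultaneously in each cell, which is exactly what the 4-tuple models.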
So we will maintain an invariant, and the invariant is this: at the beginning of the simulation of one step of M, the head of the single-tape machine T is positioned at the cell which contains the left one of the two symbols being scanned by the two-tape machine. Because of this invariant, we can assume that this is where T's head starts. Now, since T is a single-tape Turing machine with four tracks, in one move it scans all four symbols in that cell, as we explained earlier. So it sees b on the top track, blank on the second track, c on the third track, and a 1 on the fourth. From the 1 it immediately knows that this c, being on the third track, is the symbol being scanned by tape two of the two-tape machine. So now T knows that M's tape-two head is scanning c. It also knows the current state of M, which is q, because the state of T is a composite object: one of its components is the state of M, which T remembers in its finite control. Now, does T already know what M's move will be? No, because it does not yet know the symbol that M's tape-one head is scanning. However, having scanned this cell, it knows that the tape-two symbol is c, and it carries that information along.
So let us say it carries that symbol c in a component of its state. Now it knows that to find the tape-one symbol it needs to move its head to the right; the symbol could be anywhere, but it knows, because of our invariant, that the second marked cell is to the right of the cell it saw at the beginning of the simulation of this step of M. So the head moves to the right until it finds a 1 in the second track; in this example it finds it immediately, because it is right there. There is no problem in carrying the symbol c in the state, because the number of tape symbols of any Turing machine, and therefore of M, is finite, so which particular symbol was read can be kept in the state. Moving right, T sees that b is the symbol being read by the first tape head. So now T knows everything: the machine it is simulating is in state q, scanning b on the first tape and c on the second. This information is enough for T to know what state M will be in at the next step, and which two symbols will replace the current symbols on the two tapes. Since it knows this information, it knows what is to be done. For example, suppose the move was: in state q, reading b on the first tape and c on the second, update b to d and c to e, move the first head one step to the right and the second head one step to the left, and let the new state be q'.
The simulating machine would know all of that, because it is a finite piece of information, namely the transition function of M. Remember also that one of the components of T's state records which of the two heads is, relatively speaking, to the left and which is to the right in this representation; in this case that component says the tape-two head was the left one. So T has all the information, and it can now complete the simulation of this one step of M by making the updates. What were the updates? The tape-one symbol b should become d, so d is written in its place. The symbol on tape two was to become e, so e is written there. And the markers must move: since the tape-one head moves one step to the right, its 1 is made blank and a 1 is written one cell to the right; since the tape-two head moves one step to the left, its 1 is made blank and a 1 is written one cell to the left. Let me emphasize: once the simulating machine knows the present state and the two symbols under the two heads, it knows all the updates that are going to take place. It is only a matter of correctly carrying out those updates in the representation, and you can see what it is going to do.
So the 1 that marked the tape-one head is made blank, and since that head is supposed to move one step to the right, the 1 in that track is written one cell further on; similarly, since the tape-two head was moving to the left, its marker comes one cell to the left. Once all these updates are done, the machine moves all the way back to the left; as it moves left it can make any remaining update, writing the new symbol and the marker 1 on the way, and then it is positioned correctly again: it is scanning, on the relevant tracks, exactly the symbols that M's heads now scan, and it is ready to start the simulation of the next step of M. At that time, since M has moved to state q', the corresponding component of T's composite state will be q'; T has not yet read the left symbol, and the component recording that the tape-two head is relatively to the left remains as it was. I have not shown certain other components: there is another component for doing this kind of bookkeeping. For example, when T sees the two symbols under M's two heads, it knows at that moment what the new symbols will be; but its own head is positioned under only one of them, which it can update immediately, so it must remember the other symbol. That is another state component: T remembers the symbol, and as it moves to the correct place it makes that update too. So you see the idea: this machine has in its state several components, and of course it also needs some state information for doing its own jobs, namely moving left and moving right.
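The whole step-simulation just described can be sketched in code. This is a hedged simplification: it finds the markers by scanning the list directly and assumes a spare blank cell is available, whereas the real machine T does all of this with head sweeps and state components as explained above; the concrete move and tape contents below are hypothetical.

```python
BLANK = "_"

def simulate_step(tape, state, delta):
    """Carry out one move of the two-tape machine M on the four-track tape."""
    cells = [list(c) for c in tape]
    h1 = next(i for i, c in enumerate(cells) if c[1] == "1")  # tape-1 marker
    h2 = next(i for i, c in enumerate(cells) if c[3] == "1")  # tape-2 marker
    move = delta.get((state, cells[h1][0], cells[h2][2]))
    if move is None:
        return None                                  # M has no move: it halts
    new_state, w1, w2, d1, d2 = move
    cells[h1][0], cells[h2][2] = w1, w2              # update the two symbols
    cells[h1][1] = cells[h2][3] = BLANK              # erase the old markers
    cells[h1 + (1 if d1 == "R" else -1)][1] = "1"    # shift tape-1 marker
    cells[h2 + (1 if d2 == "R" else -1)][3] = "1"    # shift tape-2 marker
    return [tuple(c) for c in cells], new_state

# hypothetical move echoing the lecture: in state q, reading b (tape 1)
# and c (tape 2), write d and e, move head 1 right and head 2 left, enter q'
delta = {("q", "b", "c"): ("q'", "d", "e", "R", "L")}
tape = [("a", "_", "d", "_"),   # cell 0
        ("x", "_", "c", "1"),   # cell 1: tape-2 head is here (the left marker)
        ("b", "1", "y", "_"),   # cell 2: tape-1 head is here
        ("_", "_", "_", "_")]   # spare blank cell for a marker to move into
new_tape, new_state = simulate_step(tape, "q", delta)
print(new_state)  # q'
```

After the call, d and e have replaced b and c, and the two 1s have moved one cell right and one cell left respectively, exactly the updates described above.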
For that, too, it needs some information in its finite control, and all of that is part of the composite state of this machine. So, to summarize the whole thing: at the beginning of simulating one step of M, we assume that T's head is reading the cell, with its four symbols in four tracks, which corresponds to the left one of the two head positions in the representation, and T also has in its finite state the current state of the machine being simulated. T then moves to the right to look for the symbol marked on the other track; once it knows both symbols, it updates one symbol, then moves back to the left to update the second symbol, and in this process it also updates the 1s, which, remember, are really markers for the positions of the two heads. Now, one slight detail you should note: in the step we used as an example, the relative positions of the two heads do not change; but it can happen that the head which is currently to the left keeps moving to the right while the other keeps moving to the left, so that at some point their relative order in the representation is swapped. When that happens, the left/right component of T's state must be updated; and for that situation to arise, the two heads must at some point have been in adjacent cells, or momentarily in the same four-track cell.
It is not a very difficult thing, if you think about it, for the simulating machine to know when the relative positions of the two heads are changing. Now, for each step of M, I said that the simulating machine makes a number of steps; how large can that number be? In the worst case, most of the simulating time is spent looking for where the other head's symbol is, so the point is: how far apart, in the representation, can the two head markers be? The worst case is that in every step each head moved in a fixed direction, say one head kept moving left while the other kept moving right. So if M has taken T(n) steps, the two markers can be at a distance of about 2T(n) cells. Therefore, if M recognizes its input in T(n) steps, in the worst case the simulating machine takes on the order of 2T(n) steps for each step of M; and since M takes T(n) steps in total, the total number of steps is of the order of T(n)^2. So there is a quadratic blow-up in this simulation: something that happened in T(n) steps on the two-tape machine requires on the order of T(n)^2 steps here. Ultimately, we will see that our concern is whether a simulation can be done in polynomial time, and a quadratic blow-up is good enough for that: if T(n) is a polynomial, so is T(n)^2. So the first thing to note is that the cost of this simulation is quadratic.
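The quadratic bound can be checked with a back-of-the-envelope sum. The constants here (a sweep right and back before step k costing at most 4k + 1 single-tape moves, given markers at most 2k cells apart) are illustrative assumptions, not a formal count; the point is only that the total grows like T(n)^2.

```python
def simulation_cost(t_n):
    """Upper-bound the single-tape moves needed to simulate t_n moves of M."""
    # before step k the two markers are at most 2*k cells apart, so one
    # sweep right and back costs at most 4*k + 1 moves; sum over all steps
    return sum(4 * k + 1 for k in range(1, t_n + 1))

for t in (10, 100, 1000):
    # the ratio cost / t^2 settles to a constant: a quadratic blow-up
    print(t, simulation_cost(t), simulation_cost(t) / t ** 2)
```

Closed form: the sum is 2*t_n^2 + 3*t_n, so the ratio tends to 2, confirming order T(n)^2.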
So that completes our discussion that multi-tape Turing machines can be simulated by a single-tape Turing machine, and therefore having a number of tapes does not add extra power, in terms of recognition of languages, over machines with only a single tape. So our basic model is robust. I should say, and in fact we will see examples, that having a number of tapes can be convenient; the point we have made is that multiple tapes may be convenient for carrying out some work, but they are not essential: if pressed, you can do it with a single tape. Our next topic is non-deterministic Turing machines. We have not yet defined what they are, but from our knowledge of finite state machines and pushdown automata it is not difficult to define a non-deterministic Turing machine in an appropriate manner. Non-determinism means that in a particular situation the next move can be any one of several possibilities, and the choice is made non-deterministically. So, going back to our notion of a single-tape Turing machine: suppose the present state is p and the symbol being scanned is a. If the machine is deterministic, this completely determines the next state, the symbol written in the current cell, and the direction in which the head moves. But suppose we have a Turing machine where, for the same or identical values of the first two components, there are several different quintuples: say we have (p, a, q1, b1, L), and also (p, a, q2, b2, R), and also (p, a, q3, b3, L). These are distinct quintuples, in the sense that q1, q2, q3 may differ, and the written symbols may also differ. So for the same present state and present symbol we have a number of quintuples.
In that case, what can we say? When the present state is p and the machine is scanning the symbol a, it has three choices: it can go to state q1, writing b1 in the current cell and moving to the left; or it can go to state q2, writing b2 and moving to the right; or it can choose to go to state q3, writing b3 in place of a and moving to the left. Non-determinism means that the machine non-deterministically chooses one of these three quintuples when the present state is p and the current symbol being scanned is a. So, really speaking, a non-deterministic Turing machine will also be described formally as a set of quintuples; of course it will have an initial state, which we make unique, a number of accepting halting states, tape symbols, input symbols, all of those things as before. The only difference is in the set of quintuples that defines the machine: a non-deterministic Turing machine may have several quintuples, zero or more, for the same present-state and present-symbol combination. Now, what happens? As we said, in a particular run of a non-deterministic Turing machine, the machine non-deterministically chooses one of the quintuples it can make use of and goes ahead. We can visualize this situation in the following way. Initially the Turing machine is in a certain configuration; for a single-tape machine, recalling our convention for writing configurations, the initial configuration is the initial state scanning the leftmost symbol of the input x, with the rest of the tape blank, and we write it as q0 x. Now, in the case of a deterministic Turing machine, from this configuration the machine goes deterministically, uniquely, to another configuration, and then to another configuration, and so on.
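The difference from the deterministic case shows up directly in the shape of the transition table: the same (state, symbol) pair may now map to several quintuple completions. A sketch, reusing the lecture's example names:

```python
# nondeterministic table: (state, symbol) -> SET of (next state, write, move);
# a deterministic machine would have at most one entry per key
delta = {
    ("p", "a"): {("q1", "b1", "L"),
                 ("q2", "b2", "R"),
                 ("q3", "b3", "L")},
}

def choices(state, symbol):
    """All moves the NDTM may nondeterministically pick from."""
    return delta.get((state, symbol), set())  # empty set: no move, machine halts

print(len(choices("p", "a")))  # 3
```

The "zero or more quintuples" in the text corresponds to the empty set (no applicable move) versus a set of several alternatives.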
So the computation of a deterministic Turing machine, if we think in terms of configurations, is a linear chain of configurations starting from the initial configuration. That is the situation when the machine is deterministic; now what happens when it is non-deterministic? In that case, such a linear chain of configurations representing the execution is no longer adequate, if we want to capture everything that could have happened. From the initial configuration there may be several possibilities: if C0 is the initial configuration, it is possible that from C0 the machine can go to C1, or to C2, or to C3. For a non-deterministic machine, as opposed to a deterministic one, we cannot say that from this configuration the machine will go to that configuration; all we can say is that from the present configuration there are a number of configurations, and the machine can go to any one of them. In this example, from the initial configuration C0 the machine can go to any of these three; maybe from C1 it can go to two configurations, from C2 to four configurations, and from C3 to only one. The point I would like to make is that for a non-deterministic machine, if you want to capture all possibilities fully, you should think in terms of a tree of configurations. This is called a configuration tree: starting from the initial configuration at the root, we have a tree of configurations whenever the machine is non-deterministic.
So when do we say such a machine accepts the input? The machine accepts the input if there is a path from the root in this tree along which you reach an accepting configuration. In that case we say the input is accepted by the non-deterministic machine. The definition is very similar to the other non-deterministic situations you may know, namely finite state machines and pushdown automata; at least for these two you have seen non-deterministic versions. You remember that for a non-deterministic machine, the input is accepted if there is a sequence of non-deterministic choices which can lead the machine from the initial configuration to an accepting configuration. A path in such a tree means exactly that the machine is exercising certain choices as it moves down the tree; those are the non-deterministic choices. So, once more, let me emphasize: acceptance by a non-deterministic machine can be seen as a path in the configuration tree, from the root, ending in an accepting configuration. The tree itself can even be infinite, in that some paths may be non-terminating; that is fine, but as long as there is one path from the root to an accepting configuration, the input is accepted. Let us now formally capture this notion of acceptance. Basically, we will update the same formalism that we used to explain what we mean by acceptance by a deterministic machine.
So recall that we had a relationship between configurations Ci and Cj, written Ci |- Cj, understood in the context of a particular Turing machine M; the |- is a turnstile symbol, and you can read it as "Ci in one step leads to Cj". In the deterministic case it means: if the machine is in configuration Ci, then the next configuration will be Cj. Now, in the case of a non-deterministic machine, in one step Ci may go to several configurations: Cj1, Cj2, Cj3, and so on up to Cjm. For a non-deterministic machine we can use the same one-step relation among configurations, provided we understand Ci |- Cj as: from configuration Ci, in a single step, the non-deterministic machine can go to configuration Cj. There is no determinism, so we cannot say that from Ci the machine will go to Cj; if there are several possibilities, then Ci is related to each of them. So let us repeat once more: suppose we have a non-deterministic Turing machine in which, from configuration Ci, the machine can go to the configurations Cj1 through Cjm; then we write Ci |- Cjk for every value of k from 1 to m.
So, compared to the deterministic case, we are changing the meaning of this symbol. There, the relationship held between two configurations Ci and Cj such that from Ci the machine will go to Cj; for a non-deterministic machine, all we do to update this is to say that the two configurations are related provided that from Ci, in one step, the machine can go to Cj. Once we understand this, we do the same thing we did for deterministic machines: from this one-step relation among configurations we define another relation, Ci |-* Cj, which means that in zero or more steps the machine under discussion, the non-deterministic Turing machine, can go from configuration Ci to configuration Cj. In the configuration tree, what would this mean? For example, if some configuration C occurs below the root, we can write C0 |-* C, since from C0 you can reach C in zero or more steps. But from C2 you cannot reach a configuration C that does not occur in the subtree below C2; the same configuration may of course occur somewhere else in the tree, but if it does not occur below C2, then C2 |-* C does not hold. So essentially, in terms of the tree, any node is related by this relation to itself and to all of its descendants; equivalently, a node is reachable from each of its ancestors. Once we understand this relation as a simple generalization of what can happen in single steps, it is easy to write down the language accepted by a particular Turing machine. So suppose M is an NDTM; NDTM is the abbreviation one always uses for a non-deterministic Turing machine.
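The |-* relation can be sketched as reachability over any one-step successor function. The toy tree below is an assumption made up for illustration; it mirrors the point that a node is related by |-* exactly to itself and its descendants.

```python
def reaches(c, d, step, bound=1000):
    """Decide c |-* d: can d be reached from c in zero or more steps?
    Explores at most `bound` configurations, since the tree may be infinite."""
    frontier, seen = [c], set()
    while frontier and len(seen) < bound:
        x = frontier.pop()
        if x == d:
            return True          # zero or more steps lead from c to d
        if x in seen:
            continue
        seen.add(x)
        frontier.extend(step(x))  # all one-step successors of x
    return False

# toy configuration tree: C0 -> {C1, C2}, C2 -> {C3}
toy = {"C0": {"C1", "C2"}, "C2": {"C3"}}
step = lambda c: toy.get(c, set())
print(reaches("C0", "C3", step))  # True: C3 is below C0
print(reaches("C2", "C1", step))  # False: C1 is not below C2
```

The `bound` parameter is the sketch's concession to the remark above that some paths may be non-terminating.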
So M is an NDTM, and the language accepted by M is defined (using the symbol for "equal by definition") to be the set of all strings x over the input alphabet Sigma such that q0 x |-* alpha p beta for some alpha, p, beta, where q0 is the initial state of M, so that q0 x is the initial configuration, and p is an accepting state. Essentially the idea is very simple: starting the non-deterministic Turing machine in its initial configuration, if there exists a sequence of configurations such that the machine can go from each one to the next, leading to an accepting configuration, then we say the string is in the language of the machine M. Now the question which I am sure you are already asking is this: clearly, non-determinism seemingly adds power, because a non-deterministic machine can make non-deterministic choices. Is this power such that a non-deterministic machine can recognize a language which is not in the class of languages recognized by deterministic machines? So the question we have to answer is: is there a language L and an NDTM M such that L is the language accepted by M, but there is no deterministic Turing machine accepting the same language?
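The acceptance condition just defined can be sketched end to end: a configuration is (state, tape, head), the one-step relation yields a set of successors, and x is in L(M) iff some path from q0 x reaches a configuration whose state is accepting. The tiny machine below, which nondeterministically guesses whether to accept on reading an 'a', is entirely made up for illustration.

```python
from collections import deque

BLANK = "_"
ACCEPT = {"q_acc"}
# hypothetical nondeterministic table: (state, symbol) -> set of (state, write, move)
delta = {
    ("q0", "a"): {("q0", "a", "R"), ("q_acc", "a", "R")},  # two choices on 'a'
}

def successors(config):
    """All configurations reachable from `config` in one |- step."""
    state, tape, head = config
    sym = tape[head] if head < len(tape) else BLANK
    result = set()
    for new_state, write, move in delta.get((state, sym), set()):
        new_tape = tape[:head] + write + tape[head + 1:]
        new_head = head + 1 if move == "R" else max(head - 1, 0)
        result.add((new_state, new_tape, new_head))
    return result

def accepts(x, bound=100):
    """Breadth-first search of the configuration tree, up to `bound` nodes,
    since the tree may be infinite."""
    queue, seen = deque([("q0", x, 0)]), set()
    while queue and len(seen) < bound:
        c = queue.popleft()
        if c in seen:
            continue
        seen.add(c)
        if c[0] in ACCEPT:
            return True   # some path reaches an accepting configuration
        queue.extend(successors(c))
    return False

print(accepts("aa"), accepts("b"))  # True False
```

Note that this breadth-first search is itself a deterministic procedure exploring all the nondeterministic choices, which foreshadows the question the lecture is about to pose.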
So the question is: is there a language L for which there is a non-deterministic Turing machine M to accept it, but no deterministic Turing machine to accept the same language? If the answer to this question is yes, then non-determinism indeed adds extra recognition power over deterministic Turing machines. If the answer is no, then non-determinism is just a convenient tool for expressing some languages and some algorithms, but whatever it does in terms of language recognition could be done by a deterministic machine. So essentially this question is about whether, in the case of Turing machines, non-determinism adds extra power over deterministic machines for recognition purposes or not.