Okay, so this is the first paper for today: Turing's 1936 paper, considered one of the papers that started the field of computer science, and the reason people call Alan Turing the father of computer science. I didn't know the connection until I read this paper a couple of months ago, and I felt strongly that a lot of what it says is very prophetic and worth sharing with a wider audience, which is why I prepared this talk.

As usual, I'd like to start by telling you the main contributions of this paper. The paper is from 1936, so it's almost 80 years old. The first and most obvious contribution is that Turing proposed a model of computation that today we call the Turing machine; the term "Turing complete", meaning equivalent in power to a Turing machine, also traces back to this paper. One thing to say up front: this is not the first paper to propose such a model. Church famously wrote a paper about the lambda calculus that predates this one by a couple of months; in fact he developed the lambda calculus in the early 1930s, from 1931 onwards. So this was not the first model, but it was the most convincing one, and I'll try to explain later why it became so much more popular over the years.

The second contribution is the concept of a universal machine. It seems obvious to us now that one machine, one computer, can perform any kind of task given different software. Back in the 1930s that wasn't even a known idea: you had to build a custom machine for each application, say a machine to solve differential equations or a machine to calculate something else. You needed one machine per task. What Turing showed is that you can have just one physical machine and, by giving it different software so to speak, let it do different things. That was a revolutionary idea at the time, and Turing was the first to come up with it.

The last contribution relates to the Entscheidungsproblem: he showed that it is unsolvable, by showing that no Turing machine can solve it. Again, this is not the first paper to do this; Church did it a couple of months earlier, but in a slightly different way, and I'll try to show Turing's proof.

Okay, so let's go back a little to why Turing came to write this paper. I'll go as far back as the early 1900s, to David Hilbert. Hilbert famously had this thing called Hilbert's program, in which he tried to build an axiomatic foundation for mathematics. He asked questions along these lines, where "mathematics" means some formal axiomatic system that captures all of mathematics: can we find such a system that is consistent, complete, and decidable? "Decidable" is the property connected to the Entscheidungsproblem, which is just German for "decision problem", so it's really about that last property. For completeness of the story, I'll briefly explain all three properties to get to what decidable means. I'll use the whiteboard for that.

Okay, so in Hilbert's program you have a formal axiomatic system. Basically that means a bunch of axioms plus some rules of inference, typically those of first-order logic. So that's the system: axioms and rules of inference. And it's a purely symbolic system, so you don't have to know what the symbols mean.
You just take the axioms and apply the rules, over and over, and eventually you can generate all the theorems, all the statements that are true in this formal axiomatic system.

To be consistent means the following. Say we have a statement s and its negation, not-s, and let this arrow denote "has a proof"; a proof just means taking the axioms and manipulating them with the rules. If you can find a statement s such that there is a proof of s and also a proof of not-s, then the system is not consistent. Consistent means there is no such s where you can prove both s and the negation of s. That makes intuitive sense, because only one of them can actually be true. Say s is the statement "1 + 1 = 2" and not-s is the statement "1 + 1 ≠ 2"; if your system can prove both of these, it's inconsistent. If you can't find such a statement, the system is consistent.

The second property, which is more important here, is completeness. Completeness says that for every statement s, either s or its negation has a proof, which means all the statements are covered; there is no gap. And you would want that, because every statement is either true or false, right? If the statement is false, its opposite must be true, and then we should be able to find a proof of it. That's completeness.

Sorry, you have a question? [Audience:] So a complete system is one where you couldn't have a statement like "God exists" that can be neither proven nor disproven? Right. Okay, well, I will not go there.

Okay, let's talk about the last property, decidable. That's the one we're going to focus on. Decidable means: given a statement s, a decision procedure, an algorithm, tells you whether s can be proven, whether there exists a proof of s. Say we know 1 + 1 = 2 is provable, so there is no proof of 1 + 1 ≠ 2. If you give the first statement to the decision algorithm, it says yes, there is a proof. It may not tell you what the proof is; it just tells you one exists. If you give it the second statement, it tells you no, there is no proof of this. That's what decidable means.

Okay, moving on to the next person: Gödel famously showed that if mathematics is consistent, then it must be incomplete. He exhibits a statement, the Gödel statement G, such that neither G nor not-G has a proof; there is no way to get from the axioms to either of them. It's similar to the liar's paradox, "this statement is a lie". [Audience question.] Ah, yes, that's right: the technical condition for Gödel's construction is that the system must be strong enough to contain some basic arithmetic, addition and multiplication. But that's just the technical condition needed for Gödel's proof, and in general you would expect a system able to formalize mathematics to have those capabilities, so we'll assume it's available. Okay, so that's our setting.
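To pin these down, here is a symbolic summary of the three properties. This is my notation, not Hilbert's; read $\vdash s$ as "s has a proof from the axioms":

```latex
% Consistency, completeness, decidability: an informal symbolic summary.
% \vdash s means "s is provable from the axioms"; D is a mechanical procedure.
\begin{align*}
\text{consistent:} &\quad \neg\exists s\,(\vdash s \;\wedge\; \vdash \neg s)\\
\text{complete:}   &\quad \forall s\,(\vdash s \;\vee\; \vdash \neg s)\\
\text{decidable:}  &\quad \exists D\;\forall s\,\bigl(D(s)=\text{yes} \iff\; \vdash s\bigr)
\end{align*}
```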
So we don't have completeness. Does that mean we also don't have decidability? That is where Turing comes in. After 1931 the question became: is it decidable? Completeness was settled by Gödel, but the system could still be decidable. If you feed G to the decision algorithm, it would tell you no, there is no proof, and if you feed it not-G, it would also say no, there is no proof. That is technically possible; you could still have a decision algorithm. But most mathematicians didn't think so, didn't think we have a decision algorithm, so most of the work went into trying to show that no such algorithm exists.

[Audience:] So there are systems where neither G nor not-G has a proof, and they are still very practical; you can express a lot of sentences and claims that make sense, and you might still be able to decide provability? Of course, yes. And mathematics is such a system: once you have some arithmetic, you are in this situation.

So I'll skip ahead to Turing. I think the story is that in 1935 he heard about incompleteness from Newman, who gave a lecture on it at Cambridge. As the story goes, he was lying in a field somewhere near Cambridge, thinking about how he would go about attacking the problem, and he was supposedly inspired by his mother's typewriter. The difficulty with the Entscheidungsproblem was: how do you show there is no algorithm to do something? They didn't call it an algorithm back then; say, no mechanical procedure to do something, no procedure that can tell us whether a statement has a proof or not. So the real question, the big thing you have to solve before you can even attack the problem, is how to define what you mean by a mechanical procedure.

Turing was probably influenced by the typewriter in his design of what he calls a-machines, the automatic machines. I'll try to sketch out what that is; let me erase this. You'll see why it's related to a typewriter. The machine has an infinitely long tape; the tape is your infinite memory. You can think of it the way a typewriter has a long strip of paper that you type letters onto. Then you have a head, this object here, sitting over the tape. You can move the head one step to the left or one step to the right, and there is some internal state that controls everything. You can write a symbol to the tape, like the digit 1, and you can also erase the tape; erasing just means writing the blank symbol. So you can overwrite what is on the tape. It turns out overwriting is not necessary for Turing completeness: you could have a tape that is write-once and still do everything. But it's easier if you allow overwriting, which looks more like how RAM works, where you can write to a location over and over. Anyway, that's the machine.

To demonstrate this machine in the paper, he developed some programs, or what we would call programs; his machine is like a weird programming language, basically. He considered only one kind of program: programs that print the digits 0 and 1. That's kind of interesting, because this was a time when people were building decimal machines.
Binary wasn't a popular format in the 1930s, but he used binary. Such a program prints the binary expansion of a real number between zero and one. Take one-third, for example: its binary expansion is 0.010101..., repeating forever. You can trust me on this; it equals 1/4 + 1/16 + 1/64 + ..., and this infinite series sums to one-third.

So this is the first program he writes in the paper. I'll write it here too, though in a different form from the paper's notation. I'll draw a square for the blank symbol. You start in state A, and on a blank symbol you print a 0, move right, and go to state B. In state B, on a blank symbol, you print a 1, move right, and go back to state A. The way to read this table: one axis is the symbol on the tape you are reading, the other is your current state, and I'll assume we always start in A. The triple in each cell says: if I'm in state A and I see a blank, the first element is the symbol to print (here, 0), then the move (one step right; you can only move one step at a time), then the next state (B). You can simulate this on an imaginary tape: print 0, move the head right one step, switch to B; see a blank, print 1, move right, switch back to A; and so on. You can see it will print 0 1 0 1 forever.

Turing's version is a bit more complex: he leaves all the even-numbered squares empty so that you can do some computation in them, scratch space for later. He separates the squares where the output digits are printed, which are written only once, from the in-between squares where you can store working data for more complex programs. So that's the machine for one-third.

And here comes the title: he calls these numbers, the numbers whose digits can be generated by a Turing machine, computable numbers. That's where the term comes from. These are the examples he used to show that he has created a general machine. To place the computable numbers in context, let's classify numbers. We have the natural numbers: 1, 2, 3, and so on. Then the rationals, fractions m over n. Then the algebraic numbers, which are solutions of polynomial equations; for example x squared = 2, so the square root of 2 is algebraic but not rational. Then, already before Turing, we had the transcendental numbers, which are everything else: pi, e, and so on. They are not solutions of such equations, and they have very complicated decimal expansions. And beyond these classes lie the rest of the real numbers.

Turing's computable numbers form a new class, strictly bigger than all the classes we knew so far. Each class is a subset of the next: the naturals are a subset of the rationals, the rationals a subset of the algebraics, and the computable numbers are a strict superset of the algebraic numbers.
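To make the one-third machine above concrete, here is a minimal sketch in Python of the transition table and a simulator for it. The encoding of the table is my own, not Turing's notation; note also that the simulator loop itself foreshadows the universal machine we'll get to, one fixed program that reads a table and executes it.

```python
# The two-state machine from the talk, as a transition table:
# (state, symbol_read) -> (symbol_to_write, head_move, next_state).
# None stands for the blank symbol.
TABLE = {
    ("A", None): (0, +1, "B"),   # in A on blank: print 0, move right, go to B
    ("B", None): (1, +1, "A"),   # in B on blank: print 1, move right, go to A
}

def run(table, start="A", steps=12):
    tape = {}                    # sparse tape: position -> symbol
    head, state = 0, start
    for _ in range(steps):
        write, move, next_state = table[(state, tape.get(head))]
        tape[head] = write
        head += move
        state = next_state
    return [tape[i] for i in sorted(tape)]

print(run(TABLE))   # [0, 1, 0, 1, ...]: the binary expansion of 1/3
```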
The computable class includes all the algebraic numbers, and it also includes transcendentals like pi and e. You've heard of people calculating pi to millions of digits, right? They obviously wrote some kind of program, and you could write that program in this form. It would be very cumbersome, but you could do it, so pi is also a computable number. That lends credence to the claim that this model can generate lots of different numbers.

Okay, so that's the Turing machine; I showed you the program for one-third. Now, why did this model become so much more popular than the other models of computation, like the lambda calculus and the general recursive functions? The main reason is the way Turing argues that this particular model, this head and this tape that you can move left and right and all that, is able to express every function you could ever want to compute. This became known as Turing's thesis and later, because Church was the first to come up with a model, as the Church-Turing thesis: these models capture everything we can do by computation.

The first argument is an appeal to intuition. He thought about how someone, say a mathematician working at a typewriter, would solve a problem. The person has finitely many states of mind, thinks about different parts of the problem, and writes symbols down. The symbols are necessarily discrete and drawn from a finite set, because with infinitely many symbols there would be two that look practically the same to the human eye and you would confuse them. So it comes down to human finiteness and limitations: a discrete, finite set of symbols, and finitely many states of mind, which become the finite states of the machine, plus simple operations like move left and move right.

The second argument: he gave a machine that can take the description of a formal axiomatic system and enumerate all the theorems, that is, all the statements that are provable. You can see how to write such a program: start with the axioms and keep applying the rules of inference in every possible order, and you get all the possible theorems. Of course this program never finishes, because the set of theorems is infinite, but it will gradually enumerate all of them, walking the tree of all the ways the rules can be applied to generate proofs. There is a toy sketch of this idea below.

And lastly, beyond what I've shown here, he showed that a large class of numbers is computable. Any number defined by some kind of summation, some kind of infinite series, is clearly computable, because you can write a program that adds up the terms of the series and produces the digits; famously, numbers like pi and e are computable.
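Here is that theorem-enumeration idea as a toy Python sketch. The "formal system" in it is entirely made up (three axiom strings, with modus ponens over strings as the single rule); it only illustrates the shape of the enumeration, not any real logic.

```python
# Toy theorem enumerator: start from the axioms and keep applying the one
# inference rule in every possible way. A real enumerator runs forever,
# since the set of theorems is infinite; this toy system reaches a fixpoint.
def modus_ponens(a, b):
    # From "X" and "X->Y", derive "Y".
    if "->" in b:
        left, right = b.split("->", 1)
        if left == a:
            return right
    return None

def enumerate_theorems(axioms):
    known = set(axioms)
    while True:
        new = set()
        for a in known:
            for b in known:
                t = modus_ponens(a, b)
                if t is not None and t not in known:
                    new.add(t)
        if not new:
            return known        # nothing new: this toy system is exhausted
        known |= new

print(enumerate_theorems({"p", "p->q", "q->r"}))
# {'p', 'p->q', 'q->r', 'q', 'r'}
```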
Oh, and one more thing I wrote down and forgot to put in my slides, an observation rather than something from the paper: the Turing machine has enough knobs that theoreticians over the years could modify the model, make changes, and add restrictions to it. People have two-tape Turing machines, or machines where you can write each tape square only once; all these little tweaks to the base model, and you can develop a theory around each. Whereas some of the other models, Church's lambda calculus for example, are so minimal that you can't really change them very much, so it's harder for people to produce more work based on them. That's just an observation, I guess.

[Audience:] That statement is not entirely true. All kinds of models of computation have been modified, because that's an interesting way of probing, for example, the Church-Turing thesis. There are variants of the lambda calculus too... Yes, of course you can. It's just that it's not as easy to think of all these modifications as it is with a Turing machine, because the Turing machine has a physical embodiment: you can imagine, what if instead of a linear tape I have a grid? So the fact that it is more concrete, even more complicated, may actually have helped it, I think. But I can't say for sure, of course; that's an observation.

Okay, so let's move on to the universal machine, the last point I have about this part. Basically it is what we know today as an interpreter. You've seen programs like PyPy, right: a program written in a language that can execute programs of that same language. A Python interpreter written in Python would be one example, and this is the Turing machine interpreter written in the Turing machine language. Here is an example of what one looks like. This is not the one from Turing's paper, which is written in a very odd notation; this is from, I think, Hopcroft and Ullman's book on automata, one version of a universal Turing machine. The idea is that we design some way to encode transition tables onto the tape, so the tables become data written on the tape, and there is a fixed machine, with one huge table of its own, that reads the program from the tape, executes it, and produces the same result as that program running directly on the hardware, so to speak.

One thing you'll notice if you read the paper is Turing's idea of skeleton tables. In such a big program you have subportions that are similar except that some symbols are changed, just as you would have functions in ordinary code. To deal with that he created skeleton tables, which helped him write out the whole UTM table by substituting symbols into a template.
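Here is the skeleton-table idea as a Python sketch: a template that stamps out transition-table entries with the concrete state names filled in later. The routine itself ("scan right to the first blank, then hand control elsewhere") is my own toy, not one of the m-functions from the paper.

```python
# Skeleton-table idea: a template that generates table entries, like
# substituting a real state for the placeholder X in one of Turing's
# skeleton tables.
def scan_right_to_blank(enter, exit_state, symbols=(0, 1)):
    """From state `enter`, move right over 0s and 1s; on the first blank,
    stay put and jump to state `exit_state`."""
    table = {(enter, s): (s, +1, enter) for s in symbols}  # keep moving right
    table[(enter, None)] = (None, 0, exit_state)           # blank found: done
    return table

# Instantiate the template twice with different concrete state names.
program = {}
program.update(scan_right_to_blank("find_end_A", "write_digit"))
program.update(scan_right_to_blank("find_end_B", "halt"))
print(len(program), "table entries")   # the stamped-out entries, both copies
```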
It's kind of like a macro in modern terminology. Turing machines don't have functions, but you can have something like a macro, a template for a table: instead of a concrete state A you have a placeholder state X, and you fill in the state later, as in "after this, jump back to that other thing". You substitute, say, state D for X, and whatever was X in the template becomes the state D. This helped him write the whole thing down, because it is quite complex and there are routines you want to reuse here and there.

Okay, and now I get to the last part, which is the most interesting part, I guess: the start of the whole field of computability theory, the study of which problems computers can solve. Turing's work exhibited the first problem they cannot solve. To give the punchline away, he showed an uncomputable number. So let me now show you the proof, and that will be the last part: the first result that tells us computers cannot do everything.

Okay. Based on the universal Turing machine and all the work there, we know that we can write Turing machine descriptions onto a tape; we can represent each machine as a string. Further, we can convert each description to a number, and we can sort the strings, say by length and then lexicographically, giving an ordering of all the machines. Now let's look only at the machines that produce computable numbers, the ones that print an infinite sequence of binary digits. So we have machines M1, M2, M3, and so on: the machines that produce an infinite list of zeros and ones representing computable numbers. Say M1 is the one-third machine, printing 0 1 0 1 0 1 and so on; M2 prints some other sequence; and so forth, countably many of them.

Now we appeal to the diagonal argument, a very famous argument from Cantor, to show that there exist numbers these machines cannot calculate; it's a very standard technique. Let's define a number and call it beta. Beta is defined so that its first digit is the opposite of the first digit of the first machine's output: if that digit is 0, beta's first digit is 1. Then we take the second digit of the second machine's output (this is why it's called the diagonal method: we look at the digits along the diagonal) and take the opposite, say getting 1. We take the third digit of the third machine and take the opposite, getting 0. And we do this for all the rest of the digits. That defines beta, and it turns out beta is a number that no machine can compute; there is no machine that generates beta. So why is that?
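Before the proof, here is the diagonal construction as a toy Python sketch. The list of "machines" is a stand-in I made up; the real argument assumes, counterfactually, that the list enumerates every circle-free machine's digit stream.

```python
# Cantor-style diagonal: beta's k-th digit is the flip of the k-th digit
# printed by the k-th machine, so beta differs from every listed stream.
def one_third():          # prints 0 1 0 1 ... (stand-in for machine M1)
    k = 0
    while True:
        yield k % 2
        k += 1

def zeros():              # prints 0 0 0 0 ... (stand-in for machine M2)
    while True:
        yield 0

machines = [one_third, zeros]        # pretend this list were complete

def beta_digit(k):
    stream = machines[k]()
    for _ in range(k + 1):           # advance to the k-th digit (0-indexed)
        digit = next(stream)
    return 1 - digit                 # flip it

print([beta_digit(k) for k in range(len(machines))])
# [1, 1]: differs from stream 0 in digit 0 and from stream 1 in digit 1
```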
Suppose there were such a machine. Then it must appear somewhere on this infinite list of machines; say it is Mk. Look at the k-th digit of Mk's output: by the way we defined beta, beta's k-th digit is the opposite. If Mk's k-th digit is 0, beta's k-th digit has to be 1. So Mk's output differs from beta, and the same argument works for every machine on the list: no machine on the list can print beta. So beta is a perfectly well-defined number, but it is not computable. This shows that there exist numbers that are not computable.

Perhaps that is not surprising if you look at it from set theory. The set of computable numbers is what we call countable, because we can map it, not one-to-one onto, but injectively into the natural numbers; in terms of cardinality it is no bigger than the naturals, whereas the set of all real numbers is uncountable, the continuum.

[Audience:] Why is the set of computable numbers countable? Because we can encode every machine as a number, so the machines correspond to a subset of the natural numbers. Every machine description is finite; yes, I think that's an assumption here. So you can encode a machine as a string, and you can read that string as a number in a large base, like base 256 or whatever. Of course, not every number represents a valid machine; only a subset of the natural numbers do.
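Here is a toy sketch of that encoding, along the lines just suggested: serialize the table to a string and read the bytes as one big base-256 integer. The exact serialization is my own arbitrary choice.

```python
# Encode a machine description as a natural number and decode it back.
TABLE = {("A", None): (0, +1, "B"), ("B", None): (1, +1, "A")}

def encode(table):
    text = ";".join(f"{s},{sym}->{w},{mv},{ns}"
                    for (s, sym), (w, mv, ns) in sorted(table.items(), key=str))
    return int.from_bytes(text.encode("utf-8"), "big")

def decode_text(n):
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

n = encode(TABLE)
print(n)                  # one (huge) natural number per machine
print(decode_text(n))     # and the description is recoverable from it
```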
So beta alone is not yet so useful. Turing also defines two terms that turn out to be important: circular and circle-free. The circle-free machines are the ones we actually want: machines that go on printing the digits 0 and 1 forever. All the machines M1, M2, and so on in our list are circle-free. Circular machines are machines that print only a finite number of digits and at some point get stuck. I guess "circular" means getting into some unproductive loop: Turing's machines never stop in the halting sense, there is no state that says "do nothing", in every state you do something, but a machine can fall into a loop where it never prints another output digit. So it gets stuck. [Audience:] And circle-free? A circle-free machine also never stops; it goes on forever, it just never gets stuck, and keeps printing 0s and 1s forever.

Okay, let's see where we can go with this. We can show that there is no machine that can read the description of another machine and tell whether it is circular or circle-free, and that turns out to be the key. [Audience:] In modern terminology, you mean we cannot know whether machines stop or not? Yes. I think it was Martin Davis who popularized the term "halting problem", which is what people may have heard; Turing never uses that word in his paper, because his machines never stop, as you've seen: they have to print digits forever. Funnily enough, in today's terms many of the programs we write, like server processes, are also meant to never stop, so the paper is more prophetic than we think. In any case, this is also known as the halting problem: there is no machine that, given another machine written on the tape, determines whether it is circle-free. Circle-free is the property we actually want, the good kind; the circular ones are the bad kind. There is no such machine.

Okay, how do we prove this? As usual, by contradiction: let's pretend there is such a machine, a very common move, and call this machine omega. Omega is able to read the description of another machine, say M, and it prints 1 if M is circle-free and 0 if M is circular; a reasonable interface. What can we do with omega? We can generate beta. We enumerate all machine descriptions in lexical order, and for each one we ask omega whether it is circle-free. If it is, we execute it with the universal machine, figure out its first digit, and print the opposite; that is the first digit of beta. Then we keep checking machines until the next circle-free one, M2, ask the universal machine to run it long enough to generate two digits, find the second digit, and print the opposite of it. And so on.

[Audience:] A question about this argument. What if the machine omega first reads the number of states that machine M has, then checks the number of output digits corresponding to that number of states, and whether the output repeats after that many digits? The machine has finite memory, because it has only n states. That would work sometimes, but not all the time. The states are not all the memory the machine has: you can also store things on the tape, so the tape provides additional memory. [Audience:] But it's a finite set of symbols? The symbols are a finite set, but the amount of memory you can use on the tape is infinite. Infinite, yes. In the paper he actually uses alternate positions of the tape for this; I didn't draw it here because it's confusing. The machine interleaves the output digits with squares used for working memory, so there is scratch space all along the tape.
And that is actually enough: you don't need more than this extra space between the actual output squares to store your data, and since the tape is infinitely long, you have infinitely many of these extra squares.

Okay, so we finally get to the conclusion. There is no such machine omega, because if there were, we could use it to enumerate all the circle-free machines without ever getting stuck running the circular ones, print the opposite of each diagonal digit, and eventually print beta. In other words, from omega we could construct yet another machine that prints beta, and we know that is impossible, because no machine can print beta, by the prior argument. So the anchor of the whole proof is really beta: beta is uncomputable, and from that we infer that circularity is undecidable.

[Audience:] So we cannot construct the machine that prints beta, because beta is uncomputable, and that also means we cannot check whether a machine is circular or circle-free? Exactly, yes. Sorry, let me restate: since we know we cannot construct a machine that prints beta, our original assumption that the machine omega exists must be false. There is no machine that can do this.

And there is a generalized version, not in this paper but later, called Rice's theorem, which says that any non-trivial property that depends only on a machine's behavior suffers from this problem: it is uncomputable. Turing himself also shows in the paper that determining whether a machine ever prints the symbol 0 is uncomputable. That is, you ask: does this machine ever print a 0, at any point in the future? That property is also undecidable. Similarly for "does it ever print a 1", and so on; all these behavioral properties are uncomputable.

Okay, and the last part: the Entscheidungsproblem itself. This part is fairly straightforward, I guess. We can reformulate these impossibility statements in logic. It is cumbersome, but it can be done: you take a specific machine, an actual machine Mx with its particular states, and you translate it into logical constructs, forming a statement like "Mx is circle-free" or "Mx will print some 0 at some point in time". These statements can be expressed as statements of the formal logical system. Now, if there were a decision procedure, you could hand it such a statement and ask whether it is provable from the axioms; and answering that would be equivalent to deciding whether Mx is circle-free, that is, to being the machine omega, which we just showed cannot exist. So there is no machine that solves these problems, which means the Entscheidungsproblem is unsolvable. As soon as your logic has sufficient faculties to express this kind of statement, to encode the rules of how the machines work, the idea of states, the idea of an infinite tape and so on, the logic is undecidable.
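As an aside, here is the same contradiction in its modern "halting problem" form, as a Python sketch. Turing argued via circle-free machines rather than halting; this reformulation is the one that Davis's terminology made standard.

```python
# Given ANY candidate decider halts_candidate(f), claiming to return True
# exactly when f() would eventually return, we can build a function that
# the candidate is guaranteed to be wrong about.
def make_paradox(halts_candidate):
    def paradox():
        if halts_candidate(paradox):   # "you say I halt? then I loop."
            while True:
                pass
        return None                    # "you say I loop? then I halt."
    return paradox

# Example with a (necessarily wrong) candidate that claims everything halts:
claims_all_halt = lambda f: True
p = make_paradox(claims_all_halt)
print(claims_all_halt(p))   # True, yet p() would in fact loop forever
```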
Okay, so I think that is the end. If you're interested in finding out more, I would really encourage you to look at the following references. The first one, which helped me a lot, is Charles Petzold's The Annotated Turing: he literally takes Turing's paper and adds a whole lot of commentary, so the whole paper is reproduced in the book, with additional paragraphs commenting on the work. The second is The Universal Computer by Martin Davis, which traces the line of development from Leibniz all the way down to Turing, and further on to von Neumann and the people who actually built the first physical universal computers. Turing's work was the conceptual part; it showed that such machines are possible. The subsequent work was on the engineering side: how do you actually build one out of physical parts, vacuum tubes and the like. So that's Martin Davis's book. And of course 2012 was the Turing centenary, so there are lots of videos and lectures; search for "Turing centenary" and you'll find plenty on Turing's work, not just his work in computability but also his work in biology and mathematics. Thank you for your time. Any questions?

[Audience:] My question is kind of general, maybe a bit simple. Go ahead, please. [Audience:] In computer science courses there is this idea of a finite state machine: you can analyze an algorithm by treating it as a finite state machine, optimize the finite state machine, and so on. What is the relationship between a finite state machine and the Turing machine?

Okay, interesting. A finite state machine is actually very close. If you look at the Turing machine, then aside from the tape, which is infinite, the logic in the box is itself a kind of finite state machine. But in an ordinary finite state machine you don't get to write to an external tape; there is no write, it just jumps around internally between states. The Turing machine, in addition to that finite control, has a way to remember things: it can write on the tape, move back and forth, and come back to what it wrote later. So it has memory. A finite state machine doesn't have memory to speak of, or rather it has only finite memory: if your finite state machine has, say, 10 states, then the state you are in is essentially your memory; you encode the memory in the states. The Turing machine, by contrast, has potentially infinite memory, because the tape is unbounded.

[Audience:] But the table you showed, with state A and state B, is itself a finite state machine? Right, and that comes back to the idea that you can tweak the model a lot. Throw away the tape, and what you get is what we now call a finite state machine. Put in a stack instead of a tape, and you get a different kind of machine, a pushdown automaton. Automata theory studies all these variants: it takes the basic model and rearranges it, a stack instead of a tape and so on. It still works; it may not be as powerful as this, but you can still do certain things. So an FSM is basically a Turing machine with the tape thrown away.

[Audience:] So all these concepts more or less descended from Alan Turing's paper?
I would not be so sure about the finite state machine specifically. You can see the connection, but I'm not sure Turing's paper was the first paper that led to the work on FSMs and those kinds of things. I would think, though, that the notion of studying restricted forms of computation, like the finite state machine, is a variation on restricting the amount of resource, because a lot of computer science is about solving problems with limited resources. A finite state machine needs no external memory, no RAM, just the circuit; so what kind of problems can we solve with just finite memory, instead of a large RAM to work with? It may not be directly related, but the idea of studying machines with limited resources certainly is a variation on Turing's work.

[Audience:] How do we know, and this might be in the paper, that circle-free automata based on Turing's model of computation have a limit to what they can produce? For example, imagine a machine that starts at a certain place, waits, sees a zero, changes it to a one, and goes on to the next spot... Let me take a photo of this; maybe we can take it offline and discuss separately.

I think there are no more questions. Thank you.