Now we come to the beginning of the second lecture, in which I am going to discuss representations, especially in relation to Fodor's concept of mind: how Fodor explains the concept of mind in a different way. I have already explained representation in the lectures on the different models of the cognitive mind. First, there is Fodor's concept of the representational theory of mind; secondly, the hypothesis of the language of thought and semantic engines; thirdly, the computational theory of mind; fourthly, propositional attitudes; fifthly, CRTM or the computational representational theory of mind; and sixthly, intentional realism. First of all, we will start with representations. Representations, as we know, are about things other than themselves. They are intentional in the sense of being about this or that, because mental representations have content, which is related to thought, belief and intentional action; and they are also intentional in the sense of being purposive, because a representation stands for something else. Now we may ask a question: what is it that distinguishes items that serve as representations from other objects or events? Secondly, what distinguishes the various kinds of symbols from one another? As far as the first question is concerned, there has been general agreement that the basic notion of representation involves things like standing for, being about, referring to and denoting something else. Thirdly, some theorists have maintained that it is only the use of symbols that exhibits or indicates the processes of mind and mental states. Here we would like to see that mental representations like beliefs and thoughts constitute the broad domain of cognitive science; they explain how cognition takes place in the human mind. Cognitive science includes linguistics and cognitive psychology, and it has brought about a cognitive revolution in the study of mind. 
Here we can note two important developments in cognitive science. One is the representational theory of mind: to accept the representational theory of mind is to accept that mental representations are very much like the internal representational states of a digital computer. The other is the adoption of a computational model of mind, or computational theory of mind. In turn, two questions have to be answered in this connection: what kinds of representational systems are employed in cognition, and what is machine intelligence or artificial intelligence? Fodor has answered these questions in his computational representational theory of mind, in short CRTM. The computational representational theory of mind makes a strong assumption about mental processes, namely that mental processes are computational processes; therefore, mental processes are defined over symbols. On Fodor's view, computational processes are symbolic and formal. They are symbolic because they are defined over representations, and they are formal because they apply to representations in virtue of the syntax of those representations. The theory purports to offer a solution to the problem raised by the compositionality of propositional attitudes like beliefs, thoughts, etc. Secondly, it proposes to vindicate the strong reading of the intentional realist's causal thesis regarding mental phenomena. Again, it may be noted that CRTM, the computational representational theory of mind, is consequently based on two fundamental assumptions. The first is Fodor's language of thought hypothesis, or LOT, and the second is the view that psychological explanation is both intentional and nomological. First, we have to see that the representational theory of mind holds that any propositional attitude, such as belief, desire, doubt, etc., is literally a computational relation between an organism and some formula in the internal code of that organism. 
Here, Fodor states that to believe that such and such is to have a mental symbol that means that such and such tokened in your head in a certain way; it is to have such a token in your belief box. He says that whenever we believe something, that mental activity exists as a token in our belief box. It is in virtue of this system for representing and processing information that mental states are causally related to one another. Moreover, according to Fodor, propositional attitudes as relations between organisms and internal representations are precisely the view that psychologists have independently arrived at. By providing a relational treatment of the propositional attitudes, it is possible to state how they are contentful. On the relational treatment, propositional attitudes are dyadic relations whose objects are internal mental representations. Belief descriptions and statements are true only if the organism stands in the belief relation to the intentional content. Firstly, it is natural to believe that propositional attitudes are relations. For example, when John believes something, it seems that John stands in a relation to something that is the object of the belief: John's belief relates him to the object of that belief. Secondly, existential generalization applies to verbs of propositional attitude. For example, if John believes it is raining, then we can undoubtedly say that there is something that John believes, which shows that the belief is a relation between John and something that he believes. Fodor is a realist about intentionality and propositional attitudes. The main point of his theory of intentionality is that intentionality is primary and is originally a real feature of the brain. Language is intentional only in a secondary sense, not in the primary sense. Therefore, intentionality is primary and language is secondary. 
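The belief-box picture described above can be sketched in a few lines of code: a propositional attitude is modelled as a relation between an organism and a sentence token stored in a functionally defined "box". All the names here (Organism, token, believes) are illustrative inventions, not Fodor's own vocabulary.

```python
# A minimal sketch of the "belief box": having an attitude is having
# a sentence token in the corresponding box. Purely illustrative.

class Organism:
    def __init__(self):
        # Each attitude type is a separate "box" of sentence tokens.
        self.boxes = {"belief": set(), "desire": set()}

    def token(self, box, sentence):
        """Tokening a sentence in a box = having that attitude."""
        self.boxes[box].add(sentence)

    def believes(self, sentence):
        # Belief as a relation: the organism stands in the belief
        # relation to a sentence token in its belief box.
        return sentence in self.boxes["belief"]

john = Organism()
john.token("belief", "it is raining")

print(john.believes("it is raining"))   # True
print(john.believes("snow is white"))   # False
```

The point of the sketch is only that the attitude is relational: the same sentence token could equally be placed in the desire box, yielding a different attitude toward the same content.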
Language is intentional only in so far as some of the sentences uttered in our natural languages describe real features of the mind of oneself or of others. It is this fact which helps us in generating our natural language, and also the special vocabulary in language that involves the employment of verbs operating over a propositional content. For example, suppose we say that John decided to stay at the bus stop rather than make a run to the local police station because he believed certain things, had certain desires, and came up with a certain evaluative decision. Then we are describing a series of real processes in John's brain which involve propositional operations over propositional contents. Here John has really represented to himself the possible behavioural scenarios that he should stay at the bus stop and that he should run to the local police station. He has also represented to himself a web of more general beliefs and desires which he has correlated with these two behavioural scenarios and which make these behavioural scenarios relevant, plausible and sensible solutions to his problem. That is, he has also evaluated these behavioural scenarios in such a way that he can be said, truly and literally, to decide on one of them for good reasons. From the above example, we find that John's mind must be able to make use of some medium in terms of which he can represent the behavioural scenarios, that is, beliefs, desires and so on. John's brain must have a language of thought in which the propositional contents of beliefs and decisions are carried. The contents of beliefs, decisions and other propositional attitudes are first represented and then operated on, or processed, in the individual ways which go to form the different propositional attitudes. The same propositional content, for example that there will be rain, could be the information content of two different attitudes. 
One can believe that there will be rain and hope that there will be rain, or believe it but not hope for it. According to Fodor, a theory of propositional attitudes is required to meet some conditions. There are basically two such conditions. First, a theory of propositional attitudes must explain the parallelism between verbs of propositional attitude and verbs of saying. Secondly, it must explain the opacity of propositional attitudes. By the parallelism between verbs of saying and verbs of propositional attitude, it can be shown that John believes that it is raining and its corresponding verb of saying, namely John says that it is raining, exhibit an isomorphism in syntax, semantics and logical form. The opacity of propositional attitudes is a complex phenomenon understood in terms of the following three characteristics. Firstly, statements containing verbs of propositional attitude are not truth functions of their components. For example, from the truth of the declarative sentence, George Orwell wrote Animal Farm, we cannot compute the truth of the statement that John believes that George Orwell wrote Animal Farm. Secondly, though declarative sentences warrant existential generalization, a statement occurring as the object of a verb of propositional attitude does not warrant such existential generalization. For example, from the truth of the statement, George Orwell wrote Animal Farm, we can infer that there is a George Orwell who wrote Animal Farm; but from John believes that George Orwell wrote Animal Farm, we cannot infer that there is a George Orwell such that John believes he wrote Animal Farm. Thirdly, the opacity of propositional attitudes shows that in the case of propositional attitudes the principle of substitution fails. The principle of substitution says that, given a true statement of identity, one of its terms can be substituted for the other in any true statement where one of those terms occurs, and the resulting statement is true. 
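The failure of substitution described above can be made vivid in a small sketch. It treats the belief box as a set of sentence tokens individuated by their syntax, and uses the fact that Eric Blair was George Orwell's real name (a detail added here for illustration, not from the lecture); the data structures are assumptions, not a real model of belief.

```python
# A sketch of referential opacity, following the lecture's Orwell example.
# The belief box stores sentence tokens syntactically, so substituting a
# co-referring name can change whether a sentence is in the box, even
# though it cannot change the extensional facts. Purely illustrative.

# Extensional fact: one man, two names ("Eric Blair" = "George Orwell").
authors_of_animal_farm = {"George Orwell", "Eric Blair"}

# John's belief box holds sentence tokens, individuated by their syntax.
belief_box = {"George Orwell wrote Animal Farm"}

# Extensional context: substitution of identicals preserves truth.
print("Eric Blair" in authors_of_animal_farm)            # True

# Opaque context: the substituted sentence is a different token,
# so it need not be in the belief box.
print("George Orwell wrote Animal Farm" in belief_box)   # True
print("Eric Blair wrote Animal Farm" in belief_box)      # False
```

The design choice mirrors Fodor's point: because attitudes are relations to syntactically individuated tokens, substitution of co-referring terms inside the that-clause is not guaranteed to preserve truth.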
This is all about the representational theory of mind, and Fodor has been arguing that in the representational theory of mind, the semantics can be realized in the syntax. That will become very clear if we look at the second point, the hypothesis of the language of thought and semantic engines. Here it has been argued that the mind is a kind of semantic engine. The representational theory of mind arises with the recognition that thoughts have contents carried by mental representations. For example, John believes that snow is white: John's mental representation or thought has the mental content that snow is white. As we know, there are different kinds of representations, such as pictures, maps and many other pictorial forms which refer to something. Even road signs, when we are travelling somewhere, stand for something: whether there is a curve, whether the road goes down, whether the road goes up. Such signals exist everywhere, and they stand for and refer to something. In this case, however, we are talking only about mental representations. Sententialism distinguishes itself as a version of representationalism by positing that mental representations are themselves linguistic expressions within a language of thought. Moreover, some sententialists hold that the language of thought is just the thinker's spoken language, internalized, while others identify the language of thought with Mentalese, that is, an unarticulated, internal language in which computation occurs. 
Therefore the internalized language, Mentalese, is an unarticulated, internalized language which does not exist in written form but exists only in our brain, according to Fodor. A particular belief may be true or false. If beliefs are relations to mental representations, then beliefs must be relations to representations that have truth values among their semantic properties. And since sententialism says that mental representations have truth values, it can readily account for the truth evaluation of mental representations. Here belief plays a central part in reasoning, for we can say that reasoning is a process that attempts to secure new beliefs by exploiting old beliefs. Truth-preserving reasoning would then be the manipulation of truth-valued sentential representations according to rules. Therefore the sententialist hypothesis is that reasoning consists in formal inference; it is a process tuned primarily to the structure of mental sentences. Reasoners, then, are things very much like classically programmed computers. Why like classically programmed computers? Because reasoning consists in formal inferences over mental sentences, according to Fodor. Fodor also says that this thinking is in a way mechanical: thinking is systematic and productive. How is the thinking process systematic and productive? Let us see. For example, John believes that William is taller than Russell, and this implies that John is capable of considering that Russell is shorter than William. More clearly, the fact that John can have some thought entails that he can have certain other, semantically related thoughts. Now the question is how this semanticity is possible, because semanticity is very difficult to explain in a systematic way. 
The idea is that a semantic system exists in the human brain; whenever something is explained, we understand it and can produce further, related activities. This will be very clear if you see this example. Suppose that John's thought that William is taller than Russell involves the tokening of the sentence William is taller than Russell. The first sentence makes the second available, which is productive as well as systematic. Fodor says that this Mentalese sentence is itself a complex representation containing simpler representations. As complex mental representations, mental sentences result from processes ultimately defined on mental words and expressions. Therefore, if John can produce William is taller than Russell, he must have access to William, Russell and is taller than. And if he has these mental representations, then he is capable of producing Russell is shorter than William. Thus sententialism posits that mental representations are linguistically complex representations whose semantic properties are determined by the semantic properties of their constituents. Here productivity and systematicity run together: if you postulate mechanisms adequate to account for the one, then you get the other automatically. It is as if the belief system is working in an automatic, mechanical way. The question is what sort of mechanisms exist in this case. The sentences of a natural language have a combinatorial semantics. On this view, learning a language is learning a perfectly general procedure for determining the meanings of sentences from the meanings of their lexical elements. Linguistic capacities cannot help but be systematic on this account, because the very same combinatorial mechanisms that determine the meaning of some sentences determine the meaning of all the rest. 
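The systematicity just described, that anyone who can token "William is taller than Russell" can token "Russell is shorter than William", can be sketched as a purely syntactic operation on constituents. The parsing convention below is an assumption for illustration only.

```python
# A sketch of systematicity: because the mental sentence is built from
# constituents ("William", "Russell", "is taller than"), a syntactic
# rule defined on those constituents generates the semantically
# related thought, without consulting the meaning of the sentence.

def converse(sentence):
    """Map 'X is taller than Y' to 'Y is shorter than X' by form alone."""
    subject, rest = sentence.split(" is taller than ")
    return f"{rest} is shorter than {subject}"

thought = "William is taller than Russell"
print(converse(thought))  # Russell is shorter than William
```

The rule never looks at what "taller" means; it only rearranges constituents, which is exactly how the sententialist explains why the two thoughts come as a package.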
Language expresses thought, and thought is as systematic as language is, according to Jerry Fodor. Therefore, to have the thought that William is taller than Russell is, ipso facto, to have access to the thought that Russell is shorter than William; anybody who is in a position to have one of these thoughts is, ipso facto, in a position to have the other. Therefore the language of thought explains the systematicity of thought, which is an essential requirement on the structure of the language of thought. In the mind, according to Jerry Fodor, there are two boxes: one is the belief box and the second is the desire box. The language of thought hypothesis is a speculation on the form that this storage takes: our beliefs and desires are encoded as sentences. According to Fodor, the sentences in which we think are not English sentences or sentences of any natural language, whether Oriya, Marathi, Hindi, Sanskrit or any other. Our thinking occurs in a special language, according to Jerry Fodor, and that language is called Mentalese. Mentalese is organized into words and sentences, and according to him Mentalese words are concepts and Mentalese sentences are thoughts. The sentences of Mentalese are stored in a neural medium, because patterns of neural activity can carry sentential representations. Here Fodor's language of thought fits with the multiple realizability argument, because according to Fodor cognition has nothing directly to do with its specific neurological embodiment but rather concerns processes operating on the common language of thought. Therefore this language of thought is like the multiple realizability model. 
In the multiple realizability model of cognition or mind, as we have seen, mind can be realized in different ways, and different systems can realize one and the same mind; the mind functions in a multiple way, the way even a computational system functions. Therefore this language of thought has this commonality with the multiple realizability model of mind, but according to Fodor, although cognition is embodied neurally, cognition itself is propositional, in Mentalese. Now, we can imagine a device that could manipulate sentences without regard to their meaning. Such a device, according to Haugeland, is a semantic engine, and this device would perfectly mimic the performance of a native speaker, but would do so without relying, as a native speaker would, on the meanings of the manipulated sentences. Here Haugeland is trying to argue that the mind is a kind of machine: even though the machine functions syntactically, the semantics is there, because semanticity arises from the syntax. In the same way, the human mind is like a semantic engine, and that semanticity is possible, according to Haugeland. The sentences may express propositions, but the device cares only about their shapes, that is, their syntax. In this way Haugeland says that if you take care of the syntax, the semantics will take care of itself. Now the question is: is such a device possible? According to Haugeland, not only are semantic engines possible, they actually exist. An ordinary computing machine is a semantic engine. We design and program computers so that they manipulate symbols in accordance with purely syntactical rules. The symbols are meaningful to us, but the machines that deploy them know nothing about this; they operate on uninterpreted symbols, but in a way that honors semantic constraints. 
The question now is how syntax can mirror semantics, for it has been argued that there is a close relationship between syntax and semantics. According to this view, formal logic is the best example of this syntax-semantics relationship. For example, take modus ponens: if P then Q; P; therefore Q. This rule tells us that if we have a particular configuration of symbols, we are permitted to write new symbols. What is significant about the modus ponens rule is that it is formulated and deployed without regard to semantics. But the rule makes sense, that is to say, it conforms to the semantics of inference. Let us see a concrete example. If it is raining, then I shall need an umbrella: here P stands for it is raining and Q stands for I shall need an umbrella. P, it is raining; therefore, I shall need an umbrella. Thus, from this concrete example of the modus ponens rule, it is said that formal logic mirrors this kind of semantic knowledge in rules the application of which requires no semantic knowledge. The question is what this has to do with minds. To explain the human mind by supposing that minds manipulate mental representations, we need sentences in the language of thought. But if the mind manipulates sentences, this would seem to require a sentence-understander: some component of the mind that takes in the symbols, and whose job is to understand the sentences in the language of thought. Against this background it is easier to apply the notion of a semantic engine. As we have already mentioned, a semantic engine is a device that performs symbolic operations in a way that reflects the semantic relations holding among those symbols, but does so exclusively by syntactic principles. We get the semantics because of the syntactic principles. 
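The modus ponens example above can be turned into a tiny "semantic engine": a procedure that inspects only the shapes of stored sentences, yet whose outputs honor the semantics of inference. The representation of conditionals as pairs of strings is an assumption made here for illustration.

```python
# A sketch of Haugeland's point: modus ponens applied by shape alone.
# The engine compares uninterpreted strings; it knows nothing of rain
# or umbrellas, yet every sentence it derives is a semantic consequence.

def modus_ponens(beliefs, conditionals):
    """From P and (P -> Q), derive Q, looking only at symbol shapes."""
    derived = set(beliefs)
    changed = True
    while changed:          # keep applying the rule until nothing new
        changed = False
        for antecedent, consequent in conditionals:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

beliefs = {"it is raining"}
rules = [("it is raining", "I shall need an umbrella")]
print(sorted(modus_ponens(beliefs, rules)))
# ['I shall need an umbrella', 'it is raining']
```

This is the sense in which, if you take care of the syntax, the semantics takes care of itself: the string-matching rule is formal, but because it mirrors the logical form of the conditional, its conclusions are guaranteed to be true whenever its premises are.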
In the same way, we might suppose that the mind contains mechanisms which understand the meanings of these representations; and if the mind is a semantic engine, that engine is realized by the brain. If mental operations include the manipulation of symbols, that is, sentences in the language of thought, then the embodiments of those symbols in the brain need not resemble the symbols we write with pen and paper; they might be subtle electrical or chemical states. If there is a language of thought, its sentences are invisible from the point of view of an observer examining the microstructure of a brain. Therefore it is very difficult to observe or examine the representational theory of mind from this microstructural point of view, because the language of thought is internal and exists invisibly; it is invisible to an observer seeking knowledge about the mind, even though it actually exists in the microstructure of the brain. Now we have to see that the computational theory of mind states that human mental processes are computational processes. Here the theory of computation is very important. The background of this theory of computation became more popular after the publication of Alan Turing's famous article, Computing Machinery and Intelligence. Turing's thesis about machine intelligence provided strong support for the computational theory of mind. A Turing machine is, in effect, a program in abstract symbolism. Although I have already briefly explained the computational theory of mind in the lectures on the different cognitive models of mind, we will explain at greater length how this computational theory of mind plays an important role in philosophy of mind, especially in contemporary issues in philosophy of mind and cognition, because it is one of the most important scientific explanations of the computational thesis of mind.