Language of thought hypothesis

In philosophy of mind, the language of thought hypothesis (LOTH), put forward by the American philosopher Jerry Fodor, describes thoughts as represented in a "language" (sometimes known as mentalese) that allows complex thoughts to be built up by combining simpler thoughts in various ways. In its most basic form, the theory states that thought follows the same rules as language: thought has syntax.

Drawing on empirical data from linguistics and cognitive science to describe mental representation from a philosophical vantage point, the hypothesis states that thinking takes place in a language of thought (LOT): cognition and cognitive processes are only 'remotely plausible' when expressed as a system of representations that is "tokened" by a linguistic or semantic structure and operated upon by means of a combinatorial syntax.[1] Linguistic tokens used in this mental language describe elementary concepts, which are operated upon by logical rules establishing causal connections that allow for complex thought. Both the syntax and the semantics of this system of mental representations have a causal effect on its properties.

These mental representations are not present in the brain in the same way as symbols are present on paper; rather, the LOT is supposed to exist at the cognitive level, the level of thoughts and concepts. LOTH has wide-ranging significance for a number of domains in cognitive science. It relies on a version of functionalist materialism, which holds that mental representations are actualized and modified by the individual holding the propositional attitude, and it challenges eliminative materialism and connectionism. It implies a strongly rationalist model of cognition in which many of the fundamentals of cognition are innate.

Presentation

The hypothesis applies to thoughts that have propositional content, and is not meant to describe everything that goes on in the mind. It appeals to the representational theory of thought to explain what the tokens of the mental language actually are and how they behave. There must be a mental representation that stands in some unique relationship with the subject of the representation and has specific content. Complex thoughts get their semantic content from the content of the basic thoughts and the relations that they hold to each other. Thoughts can only relate to each other in ways that do not violate the syntax of thought. The syntax by means of which basic concepts are combined can be expressed in first-order predicate calculus.

The thought "John is tall" is clearly composed of two sub-parts: the concept of John and the concept of tallness, combined in a manner that may be expressed in first-order predicate calculus as a predicate 'T' ("is tall") that holds of the entity 'j' (John). A fully articulated proposal for a LOT would have to take into account greater complexities, such as quantification and propositional attitudes (the various attitudes people can have toward statements; for example, I might believe, or see, or merely suspect that John is tall).
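This compositional picture can be sketched in code. The following is a minimal illustration only (the domain, names, and the height threshold are invented for the example): the semantic value of the complex thought T(j) is determined by the values of its parts and the way they are combined.

```python
# A toy model of compositionality: complex "thoughts" are built from atomic
# symbols by a combinatorial syntax, and their semantic value derives from
# the values of their parts. All names and numbers here are invented.

# A small domain of individuals (the "world" the symbols are about).
domain = {"John": {"height_cm": 190}, "Mary": {"height_cm": 160}}

def is_tall(entity: str) -> bool:
    """Interpretation of the predicate T ('is tall')."""
    return domain[entity]["height_cm"] >= 180

# The complex thought T(j): a predicate symbol applied to a term.
thought = ("T", "John")

# Its semantic value is fixed by the values of its parts.
predicate, term = thought
value = {"T": is_tall}[predicate](term)
print(value)  # True
```

The same syntactic frame accepts any term: substituting "Mary" for "John" yields a different thought, `("T", "Mary")`, evaluated by exactly the same combinatorial rule.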

Precepts

1. There can be no higher cognitive processes without mental representation. The only plausible psychological models represent higher cognitive processes as representational and computational; thought needs a representational system as an object upon which to compute. We must therefore attribute a representational system to organisms in order for cognition and thought to occur.

2. There is a causal relationship between our intentions and our actions. Because mental states are structured in a way that causes our intentions to manifest themselves in what we do, there is a connection between how we view the world and ourselves and what we do.

Reception

Some philosophers have argued that our public language is our mental language, that a person who speaks English thinks in English. Others contend that people who do not know a public language (e.g. babies, aphasics) can think, and that therefore some form of mentalese must be present innately.[citation needed]

The notion that mental states are causally efficacious diverges from behaviorists like Gilbert Ryle, who held that there is no break between the cause of a mental state and the effect of behavior. Rather, Ryle proposed that people act in some way because they are disposed to act in that way; LOTH, by contrast, holds that these causal mental states are representational. An objection to this point comes from John Searle in the form of biological naturalism, a nonrepresentational theory of mind that accepts the causal efficacy of mental states. Searle divides intentional states into low-level brain activity and high-level mental activity: the lower-level, nonrepresentational neurophysiological processes have causal power in intention and behavior, rather than some higher-level mental representation.[citation needed]

Tim Crane, in his book The Mechanical Mind,[2] states that, while he agrees with Fodor, his reason is very different. A logical objection challenges LOTH's explanation of how sentences in natural languages get their meaning. On that view, "Snow is white" is true if and only if P is true in the LOT, where P means in the LOT what "Snow is white" means in the natural language. But any symbol manipulation needs some way of deriving what those symbols mean.[2] If the meaning of natural-language sentences is explained in terms of sentences in the LOT, then sentences in the LOT must in turn get their meaning from somewhere else, and there seems to be an infinite regress of sentences getting their meaning. Sentences in natural languages get their meaning from their users (speakers, writers),[2] so sentences in mentalese would have to get their meaning from the way in which they are used by thinkers, and so on ad infinitum. This regress is often called the homunculus regress.[2]

Daniel Dennett accepts that homunculi may be explained by other homunculi, but denies that this yields an infinite regress: each explanatory homunculus is "stupider," or more basic, than the homunculus it explains, so the regress bottoms out at a level so simple that it needs no interpretation.[2] John Searle points out that it still follows that the bottom-level homunculi are manipulating some sort of symbols.

LOTH implies that the mind has some tacit knowledge of the logical rules of inference and of the linguistic rules of syntax (sentence structure) and semantics (concept or word meaning).[2] If LOTH cannot show that the mind knows it is following the particular set of rules in question, then the mind is not computational, because it is not governed by computational rules.[2][3] Critics also point to the apparent incompleteness of this set of rules in explaining behavior: many conscious beings behave in ways that are contrary to the rules of logic, yet this irrational behavior is not accounted for by any rules, showing that there is at least some behavior that does not accord with this set of rules.[2]
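What "following a rule of inference" amounts to on this picture can be illustrated with a toy version of modus ponens that operates purely on the syntactic shape of belief tokens. This is an invented sketch, not any specific proposal from the literature; the rule never consults the meaning of "rain" or "wet-streets", only the structure of the tokens.

```python
# Modus ponens as pure symbol manipulation: from a conditional token
# ("if", P, Q) and the token P, derive the token Q. The rule inspects
# only syntactic form; the symbols' meanings play no role.

def modus_ponens(beliefs):
    """One pass of modus ponens over a set of belief tokens."""
    derived = set(beliefs)
    for b in beliefs:
        # A conditional is any 3-tuple tagged "if" whose antecedent is believed.
        if isinstance(b, tuple) and len(b) == 3 and b[0] == "if" and b[1] in beliefs:
            derived.add(b[2])
    return derived

beliefs = {("if", "rain", "wet-streets"), "rain"}
print(sorted(modus_ponens(beliefs), key=str))
# [('if', 'rain', 'wet-streets'), 'rain', 'wet-streets']
```

The objection in the text can be put in these terms: nothing in the code shows that the system *knows* it is following modus ponens, and real reasoners often fail to draw exactly such inferences.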

Another objection within the representational theory of mind has to do with the relationship between propositional attitudes and representation. Dennett points out that a chess program can have the attitude of "wanting to get its queen out early" without having a representation or rule that explicitly states this. A multiplication program on a computer computes in the computer language of 1s and 0s, yielding representations that do not correspond to any propositional attitude.[3]

Relation to connectionism

Connectionism is a more recent approach to artificial intelligence that accepts much of the same theoretical framework as LOTH: that mental states are computational and causally efficacious, and very often that they are representational. However, connectionism stresses the possibility of thinking machines, most often realized as neural networks, interconnected sets of nodes, and describes mental states as able to create memory by modifying the strength of these connections over time. Two notable features of neural networks are the interpretation of units and the learning algorithm. "Units" can be interpreted as neurons or groups of neurons. A learning algorithm changes connection weights over time, allowing the network to modify its connections. A unit's activation is a numerical value representing some aspect of that unit at a given time, and activation spreading is the propagation, over time, of activation to all the other units connected to the activated unit.
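These ideas can be made concrete with a minimal sketch: three units with weighted connections, a few steps of activation spreading, and a simple Hebbian-style weight update standing in for a learning algorithm. All weights, rates, and values are invented for illustration; real connectionist models are far more elaborate.

```python
# A toy connectionist network: units hold numerical activations, connections
# have weights, activation spreads along weighted connections each time step,
# and a learning rule modifies connection strengths over time.

weights = {            # weights[i][j]: connection strength from unit i to unit j
    0: {1: 0.5, 2: 0.3},
    1: {2: 0.7},
    2: {},
}
activation = [1.0, 0.0, 0.0]   # unit 0 starts active

def spread(activation, weights):
    """One step of activation spreading along the weighted connections."""
    new = list(activation)
    for i, targets in weights.items():
        for j, w in targets.items():
            new[j] += w * activation[i]
    return new

def hebbian_update(weights, activation, rate=0.1):
    """Toy learning rule: strengthen connections between co-active units."""
    for i, targets in weights.items():
        for j in targets:
            targets[j] += rate * activation[i] * activation[j]

for step in range(2):
    activation = spread(activation, weights)
print([round(a, 2) for a in activation])   # [1.0, 1.0, 0.95]

hebbian_update(weights, activation)
print(round(weights[0][1], 2))             # 0.6: the 0->1 connection strengthened
```

The point of the sketch is that nothing here is a sentence-like symbol: the network's "memory" just is the pattern of connection strengths, which is what the dispute with LOTH turns on.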

Since connectionist models can change over time, supporters of connectionism claim that they can solve the problems that LOTH poses for classical AI. These problems show that machines with a LOT syntactic framework are very often much better than human minds at solving certain problems and storing data, yet much worse at things the human mind does with ease, such as recognizing facial expressions, recognizing objects in photographs, and understanding nuanced gestures.[2] Fodor defends LOTH by arguing that a connectionist model is just a realization or implementation of the classical computational theory of mind, and therefore necessarily employs a symbol-manipulating LOT.

Fodor and Zenon Pylyshyn use the notion of cognitive architecture in their defense. Cognitive architecture is the set of basic functions of an organism with representational input and output. They argue that it is a law of nature that cognitive capacities are productive, systematic, and inferentially coherent: anyone who can understand one sentence of a given structure has the ability to produce and understand other sentences of that structure.[4] A cognitive model must have a cognitive architecture that explains these laws and properties in a way compatible with the scientific method. Fodor and Pylyshyn say that cognitive architecture can only explain the property of systematicity by appealing to a system of representations, and that connectionism either employs a cognitive architecture of representations or it does not. If it does, then connectionism uses a LOT; if it does not, then it is empirically false.[3]
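The systematicity and productivity claims can be illustrated with a toy combinatorial syntax. The lexicon and the NP-V-NP frame below are invented for the example (this is not Fodor and Pylyshyn's own formalism): a system that combines the lexicon by rule thereby produces, and can recognize, every sentence of that structure, not just the ones it has encountered.

```python
# A toy combinatorial syntax illustrating systematicity: one rule,
# Sentence -> Subject Verb Object, generates every sentence of that
# structure from a small invented lexicon.

names = ["John", "Mary"]
verbs = ["loves", "sees"]

def sentences():
    """Generate every NP-V-NP sentence the rule licenses."""
    for subj in names:
        for verb in verbs:
            for obj in names:
                yield (subj, verb, obj)

all_sentences = list(sentences())
print(len(all_sentences))                          # 8 (= 2 * 2 * 2)
print(("Mary", "loves", "John") in all_sentences)  # True
```

The point is that the ability to handle "John loves Mary" and the ability to handle "Mary loves John" come from one and the same combinatorial rule, which is exactly the kind of explanation Fodor and Pylyshyn argue a cognitive architecture must provide.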

Connectionists have responded to Fodor and Pylyshyn by denying that connectionism uses a LOT, by denying that cognition is essentially a function that uses representational input and output, or by denying that systematicity is a law of nature that rests on representation.[citation needed]

Empirical testing

Since its formulation, LOTH has been subjected to empirical testing. Not all experiments have confirmed the hypothesis:

  • In 1971, Roger Shepard and Jacqueline Metzler tested Pylyshyn's particular hypothesis that all symbols are understood by the mind in virtue of their fundamental mathematical descriptions. Shepard and Metzler's experiment consisted of showing a group of subjects a 2-D line drawing of a 3-D object, and then that same object at some rotation. According to Shepard and Metzler, if Pylyshyn were correct, then the amount of time it took to identify the object as the same object would not depend on the degree of rotation of the object. They found instead that recognition time increased in proportion to the degree of rotation, disconfirming Pylyshyn's hypothesis.[citation needed]
  • There may be a connection between subjects' prior knowledge of the relations that hold between objects in the world and the time it takes them to recognize the same objects. For example, subjects are less likely to recognize a hand that is rotated in a way that would be physically impossible for an actual hand.[citation needed] It has also since been empirically tested and supported that the mind may better manipulate mathematical descriptions as topographical wholes.[citation needed] These findings have illuminated what the mind is not doing when it manipulates symbols.[citation needed]

References

  1. ^ "The Language of Thought Hypothesis", Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/language-thought/
  2. ^ Crane, Tim (2003). The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation, 2nd ed. New York: Routledge.
  3. ^ Aydede, Murat (2004-07-27). "The Language of Thought Hypothesis". http://plato.stanford.edu/entries/language-thought/
  4. ^ Garson, James (2010-07-27). "Connectionism". http://plato.stanford.edu/entries/connectionism/
  • Ravenscroft, Ian (2005). Philosophy of Mind. Oxford University Press. p. 91.
  • Fodor, Jerry A. (1975). The Language of Thought. New York: Crowell.


Wikimedia Foundation. 2010.
