Friday, September 17, 2010

Later that year I wrote this one.

Corpuscularianism and Artificial Intelligence

Modern-day corpuscularianism takes up Harre’s Principle of Structural Explanation. Harre says that everything in the world that we know of can be explained in terms of “relations among a small number of elementary units” (Harre 164). In other words, everything we perceive and experience in the world, whether taste, smell, or any other observation, ultimately comes down to a change in the direction or degree of motion of the basic elements involved. He also distinguishes two relationships between the properties of parts and the properties of the whole. In the first, the property of the whole is the sum of the properties of the individual parts. The second relation between parts and whole is emergence, in which an aggregate has properties that are not properties of the individuals of which it consists. These emergent properties are usually “explained by characteristics of the structure, not just by the components that enter into the structure” (Harre 145). For instance, if we were to relate the behavior of a crowd to the behavior of its individual members, the characteristics of the crowd would not be additive functions of the characteristics of its members. A small Python sketch of this contrast follows below.
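Here is that sketch (my own illustration, not Harre’s): an additive property such as total mass depends only on the components, while a structural property such as whether the parts hang together as one connected whole depends on how those same parts are arranged. The part names, masses, and arrangements are assumptions chosen purely for illustration.

    # Additive property: the whole is just the sum of its parts.
    parts = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0}   # four identical units, masses assumed

    def total_mass(parts):
        return sum(parts.values())

    # Structural (emergent) property: whether the parts form one connected whole
    # depends on the pattern of relations among them, not on the parts alone.
    def is_connected(nodes, edges):
        reached, frontier = set(), {next(iter(nodes))}
        while frontier:
            node = frontier.pop()
            reached.add(node)
            frontier |= {m for (u, v) in edges if node in (u, v)
                         for m in (u, v) if m not in reached}
        return reached == set(nodes)

    chain = {("a", "b"), ("b", "c"), ("c", "d")}   # parts linked into one chain
    fragments = {("a", "b"), ("c", "d")}           # same parts, arranged as two fragments

    print(total_mass(parts))                # 4.0 regardless of arrangement: purely additive
    print(is_connected(parts, chain))       # True: a property of the structure
    print(is_connected(parts, fragments))   # False: same components, different whole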

Contemporary corpuscularians believe that structure is what explains the properties of an entity. If an alien whose brain is systematically different from ours and a human are both put in a state of pain, causing each to experience “distress, annoyance, and practical reasoning aimed at relief,” then a functionalist would say that even though the alien’s internal makeup may be very different from a human’s, the alien’s pain state is nonetheless identical to a human pain state (Churchland 36). Corpuscularians, by contrast, would say that the pain state the alien experiences is completely different from the human’s because their physical structures are dissimilar; the two views are therefore inconsistent. Turing-machine functionalists believe that since any given mental state cannot be reduced to the particular physical mechanism that causes it, mental states must be something more than the merely physical. This also implies that “mentality is not the matter of which the creature is made, but the structure of the internal activities which that matter sustains” (Churchland 37). For instance, human mental states are not restricted to human biological systems like our brains; they are instead a matter of the feelings and states of being we experience under certain conditions, whatever physical system sustains them.
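The disagreement is easier to see in a small sketch. The following Python example (mine, not Churchland’s, with made-up class names and substrates) defines a “pain role” purely by its inputs and outputs; a functionalist test compares only that role, while a structural test compares the underlying makeup, so the two tests give opposite verdicts about the alien.

    class HumanPainSystem:
        """Realizes the pain role in a neuron-based substrate."""
        substrate = "carbon-based neurons"

        def respond_to_damage(self, stimulus):
            # The functional profile of pain: distress, annoyance, reasoning toward relief.
            return {"distress": True, "annoyance": True, "goal": "seek relief"}

    class AlienPainSystem:
        """Realizes the same role in a structurally alien substrate (assumed)."""
        substrate = "silicon gel lattice"

        def respond_to_damage(self, stimulus):
            return {"distress": True, "annoyance": True, "goal": "seek relief"}

    def same_functional_state(a, b, stimulus="tissue damage"):
        # Functionalist test: identical causal role (same inputs and outputs).
        return a.respond_to_damage(stimulus) == b.respond_to_damage(stimulus)

    def same_structural_state(a, b):
        # Corpuscularian test: identical underlying structure.
        return a.substrate == b.substrate

    human, alien = HumanPainSystem(), AlienPainSystem()
    print(same_functional_state(human, alien))   # True: same pain state for the functionalist
    print(same_structural_state(human, alien))   # False: different state for the corpuscularian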


In The Large, the Small and the Human Mind, Penrose argues against the Turing-functionalist view that the human mind can be modeled by a Turing machine. He groups the various viewpoints one can take about the relationship between conscious thinking and computation into four categories: A, B, C, and D. Type A simply states that all thinking is the carrying out of some computation, and that if you carry out the appropriate computations, awareness will result. Type D rules out all such possibilities and says that awareness cannot be explained in physical, computational, or any other scientific terms. Types B and C fall somewhere between the extremes of A and D (Penrose 101). Turing was a type A thinker who must have believed that mathematicians are essentially computers carrying out algorithmic procedures in order to ascertain mathematical truth. He held a view similar to the computational theory of mind, which states that if something can perform the functions that characterize a person, then it is a person, regardless of its structural makeup. Penrose says this is simply unsound because “one should not be concerned with how one might get inspiration, but how one might follow an argument and understand it” (Penrose 112). Furthermore, Penrose believes that consciousness is something global, so any physical process responsible for consciousness would have to have global characteristics, such as large-scale quantum coherence, and for that to be possible it needs a high degree of isolation. For that reason, he speculates that there must be some type of quantum oscillation of isolated mass taking place within the microtubules of neurons (Penrose 131-33). On the whole, I think Penrose does not give up Harre’s Principle of Structural Explanation, because he does think that consciousness results from something elementary going on at the quantum level, even though he also speculates that whatever is responsible for this consciousness must be somehow isolated from the rest of the brain.
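For readers unfamiliar with what “carrying out a computation” amounts to here, the following toy Turing machine in Python (my own minimal sketch, not an example from Penrose or Turing) shows the Type A picture in miniature: a finite table of state transitions applied to a tape, with nothing else going on. The particular states, symbols, and the bit-flipping task are arbitrary assumptions.

    def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
        # A tape of symbols, a read/write head, and a finite transition table.
        tape, head = list(tape), 0
        for _ in range(max_steps):
            if state == "halt":
                break
            if head >= len(tape):
                tape.append(blank)
            symbol = tape[head]
            write, move, state = rules[(state, symbol)]   # look up the rule and apply it
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape).rstrip(blank)

    # Transition table: flip each bit, move right, halt at the blank end of the tape.
    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine("10110", rules))   # prints "01001"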

The Principle of Structural Explanation states that everything that goes on can be broken down and explained in terms of the most basic “bits and pieces” of its components, moving upward from micro-kinds to chemical kinds, to biological kinds, to neuro-kinds, and finally to mental kinds. Downward causation, by contrast, leads us to think that in addition to this upward supervenience there is a downward causal order that starts from the mental kinds. Returning to the functionalist’s example of pain: if one accidentally received a paper cut, the sensory signal would first propagate upward from the quarks and neurons until it reached the mental state and one felt the pain, and this might in turn emergently cause the person to experience annoyance, which is another emotion. This time the mental kinds downwardly cause activity in the lower levels. I believe downward causation is very real, and for it to exist our brains must be able to run in reverse, so to speak, so that the physiological components can carry out all these procedures. An example of downward causation in everyday life is stress or nervousness experienced at the mental level: as a result you may get an upset stomach, an increased heartbeat, or even a cold sweat without any justifiable physical cause for them.
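As a rough picture of the two directions just described, the toy sketch below (entirely my own construction; the level names and bodily effects are assumptions) traces an upward path from a stimulus through the lower kinds to the mental kinds, and a downward path from a mental state back to physiological effects.

    # Upward causation: stimulus -> micro-level -> neural level -> mental kinds.
    UPWARD_LEVELS = ["quarks and particles", "neurons firing", "sensation of pain", "annoyance"]

    # Downward causation: a mental kind producing effects at lower physiological levels.
    DOWNWARD_EFFECTS = {
        "stress": ["increased heartbeat", "upset stomach", "cold sweat"],
        "annoyance": ["muscle tension", "wincing"],
    }

    def upward_causation(stimulus):
        return " -> ".join([stimulus] + UPWARD_LEVELS)

    def downward_causation(mental_state):
        return [f"{mental_state} -> {effect}" for effect in DOWNWARD_EFFECTS.get(mental_state, [])]

    print(upward_causation("paper cut"))
    for link in downward_causation("stress"):
        print(link)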


In the movie Artificial Intelligence, Dr. Hobby suggests that by mapping the impulse pathways in a single neuron one can construct a Mecha who can love. If this is true, then it supports contemporary corpuscularianism, because it describes supervenience from the movement of a single neuron up to a complicated emotion called love. I believe this approach to replicating someone who can love is slightly nonsensical. Love is an emotion we learn through our existence on earth, by growing up and interacting with friends, family, pets, and our significant others. It is not code in some complicated algorithm that you could put into a machine to generate love. Even if one could come up with such a code, people love in different ways. I believe that no two people can possibly experience love in exactly the same way, and no two can love in exactly the same way, and thus such a code cannot be generated.

Churchland, P. Matter and Consciousness, revised ed. Cambridge, MA: MIT Press, 1988.

Harre, R. The Philosophies of Science. London: Oxford University Press, 1972.

Penrose, R. The Large, the Small and the Human Mind. Cambridge: Cambridge University Press, 1999.

Artificial Intelligence. Dir. Steven Spielberg. 2001. Film.

