
The Ghost in the Machine: Consciousness, Computers, and Philosophical Zombies

Can Machines Think?

If you were to have a regular online conversation with one live person and one computer simulating human dialogue, would you be able to determine which dialogue was human?

(This test is known as the ‘Turing Test.’) To this day, no machine has reliably fooled its human judges. But if a computer were developed to converse in a way that was indistinguishable from a human being, could we say that it has the capacity to think?

We must consider that there are fundamental differences between a mind and a machine. This advanced computer may be able to produce coherent responses and engage in a discussion only because a human programmed into it a “representation” of human experience (i.e., information about what people do in certain situations). But no matter how much knowledge you provide a program, it will never share the kind of conscious understanding that makes human beings unique. Though programs possess the rules for language, they do not comprehend its meaning.

John Searle proposed a thought experiment called ‘The Chinese Room’ that challenges the idea that a machine with the appropriate inputs and outputs can understand in the same way a mind does (the position he deems ‘strong artificial intelligence’). Locked in the room, Searle is given three sets of Chinese writings: a “script,” a “story,” and “questions.” The writings are slid under the door, and Searle is then given instructions in English to correlate the excerpts and respond in Chinese, identifying the symbols only by their shapes.

To Searle, the Chinese characters are just “so many meaningless squiggles,” yet the responses he gives are impossible to differentiate from those of a native Chinese speaker. In this way the room as a unit functions the same way as a programmed computer, and it passes the Turing Test.

"The Chinese Room"

Although the program can manipulate symbols to convey something meaningful to a mind reading its outputs, the words mean nothing to the program itself. There is no intentionality. Human thoughts, unlike the states of such AI systems, have semantics.
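As a loose illustration (and only a hypothetical sketch, not Searle’s own setup), the “program” in the room can be pictured as nothing more than a lookup table: it matches incoming symbols purely by their shape and hands back a pre-written reply. The short Python sketch below, with an invented RULE_BOOK table and chinese_room function, shows how such purely syntactic rule-following could produce fluent-looking answers without any understanding of what the symbols mean.

# A minimal, hypothetical sketch (not Searle's own formulation): the "rule book"
# is just a lookup table that matches whole strings of Chinese symbols by their
# shape and hands back a canned reply. Nothing here understands Chinese.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How is the weather?" -> "The weather is nice."
}

def chinese_room(symbols: str) -> str:
    """Return a reply by rule-matching alone; no meaning is involved."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # From outside the room, the output looks fluent.

However large the rule table grows, nothing in it grasps the meaning of a single character, which is exactly the point about syntax without semantics.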

One may still ask, “What about an artificial, man-made machine?”, assuming it is possible to produce one with a nervous system identical to ours. John Searle argues that only special kinds of machines, namely human beings, have the capability to produce what we call consciousness. He maintains that the brain is a “biological phenomenon” with the “causal powers” to have intentionality, and that if we can, as he says, “duplicate the causes,” we can thereby “duplicate the effects.” However, philosopher David Chalmers argues against this assertion, contending that conscious experience “is not logically supervenient on the physical.”

There is something “it is like” to have consciousness.

The experience is more than mere brain structure; it is something irreducible, something impossible to understand in terms of our current knowledge of the physical world. Consider Chalmers’s thought experiment, the “logical possibility of zombies.” These zombies aren’t the mentally inept, flesh-hungry beings we see in the movies; rather, philosophical zombies are creatures identical to ourselves down to the atomic level.

My “zombie clone” processes the same information and performs the same behaviors as I do. She looks out the window and sees green trees and a blue sky, and she tastes the sweetness of an apple. She can hold an intelligent conversation, report the contents of her internal states, and perform complicated tasks. However, though she can physiologically perceive the way I do, she does not necessarily conceive the way I do.

Which is the zombie clone? Can you tell if someone has conscious awareness?

The zombie lacks the qualia, the inexplicable ‘mental feel’, of all these sensations.

The point of the argument is that the possibility of philosophical zombies is “conceptually coherent,” and so the existence of conscious experience does not depend on the facts of “my functional organization” or physical structure. An opponent may argue that a thought experiment such as this is an “imperfect guide to possibility,” and therefore not relevant. But we can look at the “phenomenon of a posteriori necessity” to see the connection between conceivability and possibility. We can’t imagine H2O not being water, but we can conceive of water not being H2O, and a zombie world is conceivable in the same sense.

My zombie clone cannot experience qualia. She may know all of the physical principles of light, and every physiological fact about the effects of a 460 nm wavelength hitting the eyes. But the zombie does not know what blue is, just as a person who was born blind cannot know. Frank Jackson speaks to this notion in his scenario “What Mary Didn’t Know,” in which Mary, a neuroscientist and physicist, is raised in an environment completely absent of color. Even if Mary knew every physical fact about how the brain processes the color red, she would never possess the knowledge of what it looks like. No knowledge of physiology or light frequency would ever allow one to differentiate between the experience of red and the experience of green. The facts of qualia are not physically reducible, and therefore neither is consciousness.

For consciousness to be explained, it would have to entail an entirely new set of laws apart from the known physical laws; it would have to supervene on some different, unknown properties. Just as quantum mechanics demanded a new set of rules apart from Newtonian physics without interfering with it, Chalmers argues that the fundamental laws of consciousness will still be “natural” but different from the physics we know. He states, “For physics to explain consciousness would take something extraordinary… but in the end [quantum mechanics] is simply not extraordinary enough.”

“How are we aware we have consciousness?”

“How do we know other human beings are conscious too?”

What Mary Didn't Know

We all have a basic intuition that “there is something to be explained.” How could we question the nature of consciousness if it didn’t exist? We obviously must be aware that there is some phenomenon at work. This knowledge, however, can only come from ourselves. The idea can only be acquired through first-person experience, through our own minds. We can only recognize our own thinking and sense our own emotions; we can introspect. But we can only derive an idea of another person’s mental state through their actions. We cannot readily observe it from the third-person perspective.

This poses a challenge, since it has been established that we cannot obtain mental facts through objective study, which would include observation of human behavior. Nagel says that if “the subjective character of experience is fully comprehensible only from one viewpoint, then any shift to greater objectivity [takes us farther away from the] real nature of the phenomenon.”  It brings us all the way back to Descartes’ “cogito ergo sum” skepticism.

There is something "it is like" to experience RED

How can we know that the other bodies around us aren’t just “philosophical zombies”?

One might simply conclude that other people have thoughts because they function in similar ways. Some use this argument from analogy to verify the consciousness of animals: like us, they seem to experience pleasure and pain. Descartes believed otherwise, holding that animals were only machines because they cannot express anything with syntax (knowledge of today’s advanced technology might have altered his theory). But behavior is not sufficient to assure the presence of cognition, because the supposition is based on only one case: the individual contemplating the problem of other minds. It is invalid induction; it may just be that you are the only one with a mind, and you therefore cannot impose this quality on anyone else.

Descartes says we can only attribute mental states to ourselves, but how do we do this in the first place? The empiricist John Locke says we learn through experience. A child learns the meaning of fear, for example, only when it also learns what it means for someone else to be afraid. To have a mind presupposes interaction with other minds.

He says, “When children have, by repeated sensations, got ideas fixed in their memories, they begin to learn the use of signs and speech to signify their ideas to others.” The meaning of the sensation of fear is learned when it is accurately applied both to oneself and to others. Without other minds, the qualia of the emotion would have no meaning at all.

In order to communicate, people must have the same experience of certain sensations. For example, if I refer to New York City, we both agree on the actual, existing place rather than a mere image of the city in my mind. Locke claims, “It is the actual receiving of ideas that gives us notice of the existence of other things, and makes us know, that something doth exist at that time without us.”

(Less tangible experiences, such as the experience of colors, are a bit more difficult: the word “blue” receives its meaning from its association with our sensation upon seeing the color, but since, as Chalmers claims, we can conceive of a world in which my sensation of “blue” is really what your sensation of “red” is, we can never rule out this possibility. What remains important for language is our agreement about distinct experiences, no matter how we perceive them individually.)

The idea that animals, incapable of spoken communication, are only machines has been largely abandoned. The belief would entail that we have no ethical duties towards any non-human creatures; a dog’s yelp would just be an automatic, ‘programmed’ response to pain. Most people immediately reject this notion. We believe that though animals don’t have the intelligence to use syntax in communication, they do have semantics. A dog shows understanding when you show it a leash and intentionality when it barks for a treat. A machine, on the other hand, having the mechanized intelligence to form syntactic phrases void of semantics, is just the opposite. Human beings are the unique case, with the benefits of both syntax and semantics.

The technology of today may make the Turing Test seem inadequate as a standard, for in this age we have the capacity to create machines that mimic even the most complicated of human behaviors and expressions. Nonetheless, it will never be enough: even if a ‘zombie twin’ were to be artificially created, it could not be deemed conscious.

The foundation of awareness and understanding, the exact source of what gives rise to qualia, remains a mystery, and perhaps someday the problem of consciousness can indeed be solved by some extraordinary ‘new physics’, or what Chalmers calls “a very different sort of explanation, requiring some radical changes in the way we think about the structure of the world.”

~Can you hold the belief that all we are (our character, loves, fears, convictions, aspirations, and sense of identity) is in fact just part of a grand biological program, merely a product of many complex chemical reactions occurring in our brains?~

…Or does the presence of this mysterious human consciousness entail that we are something more: something greater and unknown, the “Ghost in the Machine”?
