What Happens When Machines Become Conscious?

Geoffrey Peters
gpeters@sfu.ca

Simon Fraser University
CMPT 310 – Artificial Intelligence
Instructor: D. Cukierman
January 26, 2005


 

 

Some leading techno-pundits, such as Ray Kurzweil, believe that machines will become conscious within our lifetimes. In his book The Age of Spiritual Machines, Kurzweil writes that computers will “increasingly appear to have their own personalities, evidencing reactions that we can only label as emotions and articulating their own goals and purposes.” He goes even further, saying that computers will “appear to have their own free will” and “have spiritual experiences” (Kurzweil 6). This is an astounding prediction, but one echoed by many of today’s artificial intelligence (AI) theorists. In this brief discussion, I will bring into focus some of the questions surrounding intelligent computers and consciousness.

Whether or not computers will ever be conscious or spiritual, the fact remains that they are increasingly able to accomplish tasks once thought achievable only by humans, such as playing chess or reading printed text aloud to assist the blind. Stanford computer scientist John McCarthy believes that the only reason computers cannot do some tasks as well as humans is that we do not actually understand how we solve those problems ourselves. He writes that “whenever people do better than computers on some task or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently” (McCarthy 2004). This lack of understanding of underlying mechanisms is apparent whenever experts in a field cannot fully explain how they accomplish a complex task, such as playing the saxophone or swinging a golf club, which they may understand only at a subconscious level.

As brain researcher Fred Genesee (2000) writes, even the human learning process can be seen as a kind of “programming”:

We now think that the young brain is like a computer with incredibly sophisticated hardwiring, but no software. The software of the brain, like the software of desktop computers, harnesses the exceptional processing capacity of the brain in the service of specialized functions, like vision, smell, and language. All individuals have to acquire or develop their own software in order to harness the processing power of the brain with which they are born.

Alas, if only we could find a better way to teach computers how to solve problems than our current programming methods! Kurzweil suggests that in the future we will be able to use detailed, non-invasive scanning of the human brain to replicate a brain’s structure inside a computer (53). That would certainly be one way to program a computer: by duplicating the software that is already inside a human brain.

Considering such a fanciful thought, one question to ask is whether such a computer would be conscious. One stream of argument that Robert E. Horn discusses in Mapping Great Debates: Can Computers Think? is that “consciousness arises from higher-order representational structures”. If a computer-simulated human brain had the same representational structures as a biological human brain, I would argue that the computer brain would be just as conscious as the human one.

But it really depends on the definition of “consciousness” that we are willing to accept. If we go back to the 17th century, we find that John Locke stated that “consciousness is the perception of what passes in a man’s own mind” (quoted in Horn, 1998). By that definition, I would almost be willing to say that some computers (those with self-monitoring feedback loops) are already conscious. University of Arizona researcher John Pollock supports this claim, saying that “a self-scanning robot would be a conscious robot” (paraphrased by Horn, 1998). But as Peter Swirski (2000) argues, we should be wary of becoming over-excited and making what he calls “grandiose statements” about AI.
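To make the idea of a “self-monitoring feedback loop” concrete, here is a minimal sketch in Python (my own illustration, not drawn from Pollock or Horn) of a program that records a description of its own internal state on every cycle and lets that self-report shape its next action. In Locke’s minimal sense it “perceives what passes in its own mind”, although whether such self-scanning amounts to consciousness is precisely the question under debate.

    # Toy illustration of a self-scanning agent (hypothetical example).
    # Each cycle it observes its own internal state, stores that
    # self-report, and uses it to decide what to do next: a crude
    # feedback loop, not a claim about consciousness.

    class SelfScanningAgent:
        def __init__(self, target=10):
            self.target = target        # the agent's goal
            self.value = 0              # its current internal state
            self.self_reports = []      # its record of "what passes in its own mind"

        def self_scan(self):
            # Observe and store a description of the agent's own state.
            report = {"value": self.value,
                      "target": self.target,
                      "satisfied": self.value == self.target}
            self.self_reports.append(report)
            return report

        def step(self):
            report = self.self_scan()
            # Feedback: the next action depends on the agent's own self-report.
            if not report["satisfied"]:
                self.value += 1 if self.value < self.target else -1

    agent = SelfScanningAgent()
    for _ in range(12):
        agent.step()
    print(agent.self_reports[-1])  # {'value': 10, 'target': 10, 'satisfied': True}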

Putting all the hype aside, I believe the field of artificial intelligence deserves a great deal of attention and is worthy of much excitement, because even if our machines never achieve their own consciousness or spirituality, the effort spent developing them is not wasted. Rather, the computerized tools we are creating have the potential, and indeed have already begun, to enhance our own consciousness and spiritual lives.

List of Works Cited

Genesee, Fred. Brain Research: Implications for Second Language Learning. ERIC Digest, 2000.

Horn, Robert E. Mapping Great Debates: Can Computers Think? Bainbridge Island, WA: MacroVU Press, 1998.

Kurzweil, Ray. The Age of Spiritual Machines. New York: Viking Penguin, 1999.

McCarthy, John. "What is Artificial Intelligence?" 21 November 2004. Accessed 26 January 2005. <http://www-formal.stanford.edu/jmc/whatisai/whatisai.html>

Swirski, Peter. "A Case of Wishful Thinking." Excerpts from Between Literature and Science. McGill-Queen's UP, 2000. <http://www.scienceboard.net/community/perspectives.64.html>

 

 

This work is Copyright 2005, Geoff Peters, Simon Fraser University.

Feel free to quote it, or refer to it in other documents, but please give appropriate credit.
