Philosophy and AI

The most incomprehensible thing about the universe is that it is comprehensible.
--Albert Einstein
This page contains some of my philosophical conclusions, along with their implications for artificial intelligence (AI), developed over many years. I am not a professional philosopher, only an amateur, but I have arrived at these thoughts after much study and careful consideration, and they represent my opinion today.

Ontology

An ontology is a theory of existence. We can talk about things that exist and things that don't exist. For example, René Descartes proved he existed with his famous "Cogito, ergo sum" (I think, therefore I am). Fictional characters like Sherlock Holmes do not exist and never did, but we find it interesting to talk about them.

Ontologies generally fall into two categories: monistic and dualistic. In a monistic ontology, there is only one type of thing that exists. Spinoza, who has influenced me significantly, had a monistic ontology: everything that exists is a member of the class of "substance." In a dualistic ontology there are two types of things: Descartes' philosophy, for example, features the famous mind-body dualism. There are also more complex ontologies with multiple exclusive categories of things. My ontology is monistic, as shown in the figure below:


[Figure: Schematic of my ontology.]

The topmost ellipse in the figure represents the class (set) of things that exist. The two middle ellipses are subsets of that set, as indicated by the "is a" relationships between them and the set of substantive things. The lowest ellipse is the set of ideas that are constructed by minds or that describe physical things. Representations of ideas are either physical or mental. The ideas themselves don't have existence, although like the literary characters, we can talk about them as much as we please.

The physical things include space-time and mass-energy: all of the subjects of physics, all of space, matter, and energy. The mental things include thoughts, emotions, sensations, unconscious mental processes, and consciousness. The set of ideas includes form, mathematics, software, and what is now called intellectual property. The ideas in themselves do not exist. In order for us to know an idea, it must have either a physical representation (like the text on a page) or a mental construct. Every person discovers a mathematical idea like the Pythagorean theorem anew for himself. In the process of understanding the idea, a person constructs a mental representation that he understands. It should be noted that my theory of the non-existence of ideas is contrary to Plato's. It is also interesting to note that a monistic ontology implies a pantheistic theology.
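
To make the representation point concrete: the Pythagorean idea becomes knowable to us only through some representation. The LaTeX source below is a sketch of one such physical representation (text on a page) of the idea, which itself has no existence:

    % A physical representation of the idea, not the idea itself:
    % for a right triangle with legs a and b and hypotenuse c,
    \[
      a^2 + b^2 = c^2
    \]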

Consciousness

Consciousness can be defined as what it's like to be aware. One theory holds that all multi-celled animals (worms, etc., on up) have consciousness, and I think it's true. Certainly, if you have ever fished with worms, you know they don't like being pierced with the hook, and will squirm like hell to avoid it. Most philosophers will concede that the higher animals like cats and dogs can feel pain, etc., and therefore have consciousness.

Consciousness arises in the brain, and the chemical basis of consciousness is easily established by observing the effects of anesthetics (ether, alcohol, etc.) and other consciousness-altering drugs. One theory of consciousness, based on functional equivalence, holds that because consciousness has a physical basis, it can be explained entirely in terms of physics. That is, consciousness is the physical activity of the brain. However, some distinct attributes of consciousness do not seem easily explained on a physical basis. The most obvious of these is the lack of "location" of conscious experience: physical events have locations in physical space, while consciousness does not.

Thoughts are mental events that can be conscious or unconscious. Naturally, we are personally most familiar with conscious thought, but the mind includes extensive unconscious activity, as Freud and others so rightly pointed out early in the 20th century. A conscious thought is some sort of mental structure "illuminated" by consciousness.

Some philosophers seem to confuse consciousness and intelligence, and accordingly "prove" that artificial machines can never be intelligent because they can never have consciousness. It's a fair question to ask whether a machine, intelligent or not, can ever have consciousness. Naturally, for the purpose of this investigation, we say that people are not machines, but that a machine is something artificial that does not use biological-style devices. If we assume that consciousness arises uniquely in animal brains (neurons, etc.), then by definition a machine can never be conscious. However, the assumption that consciousness can happen only in animal brains is quite debatable.

Right now, nobody has the faintest idea how to build a machine that has consciousness, and nobody has the faintest idea how to test one (to prove it has consciousness) if it should be built. We leave this topic as an open question for further research and proceed to the potentially more productive issue of artificial intelligence.

Artificial Intelligence

Before defining intelligence, I want to give my definition of an algorithm:

Definition: An algorithm is a finite set (often, but not limited to, a sequence) of actions on things ordered to achieve a desired result.

That is, echoing Turing, an algorithm is an effective procedure. Computation, in turn, is defined as the performance of algorithms. Therefore, a computer is something that performs actions on things purposefully. In other words, computation is action that has the appearance of intelligence. Algorithms can be executed in the physical world (sorting a hand of cards, for example) or in a purely symbolic world, such as in the human mind or in an artificial computer.

Applying this definition, anything that performs actions purposefully can be considered to be a computer (or to have a computer component). For example, the brain of a shark that cruises around looking for prey can be considered a natural computer.
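
As a minimal sketch of this definition, here is the card-sorting example written out in Python. Insertion sort is an illustrative choice (it mirrors how many people actually sort a hand of cards), and the card values are made up:

    # A concrete algorithm in the sense defined above: a finite, ordered
    # set of actions on things (cards) aimed at a desired result (a
    # sorted hand).
    def sort_hand(hand):
        """Return a new list containing the hand in ascending order."""
        sorted_hand = []
        for card in hand:                  # action: pick up the next card
            i = 0
            while i < len(sorted_hand) and sorted_hand[i] < card:
                i += 1                     # action: scan for its place
            sorted_hand.insert(i, card)    # action: slide it into place
        return sorted_hand

    print(sort_hand([7, 2, 11, 4]))        # prints [2, 4, 7, 11]

Whether a person performs these actions on physical cards or a processor performs them on symbols, the same algorithm is executed, which is the point of the definition.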

Notice that I have described computation in terms of intelligence, which is the more difficult term to define. Intelligence is one of those things everyone thinks he understands but finds hard to pin down. Let's see if we can do a bit better.

First, let us note that intelligence is not computation (the performance of algorithms). If it were, there would be no debate about whether computers are capable of intelligence. Intelligence is more than just effectively achieving goals. It includes deciding on the right goals.

Definition: Intelligence is deciding on appropriate goals and then formulating and executing viable plans for achieving those goals.

In order to set appropriate goals, the intelligent entity must know what is good. The problem of deciding "what is good" is a philosophical issue, so any intelligent being must have some of the qualities of a philosopher. The question then becomes whether a computer program can be written that does effective philosophy, and that requires defining philosophical activity. Therefore, a truly intelligent artifice can be built if and only if philosophical activity can be effectively defined. This is an open problem, and it may have a negative answer.
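
To make the two parts of the definition concrete, here is a toy sketch in Python. The candidate goals, their "goodness" scores, and the canned plans are hypothetical stand-ins; assigning such scores is precisely the open philosophical problem discussed above:

    # Step 1 (the philosophical step): decide on an appropriate goal.
    # Step 2 (the computational step): formulate and execute a plan.
    goals = {
        "learn calculus":   {"goodness": 0.9,
                             "plan": ["get a textbook", "study daily"]},
        "watch television": {"goodness": 0.2,
                             "plan": ["sit down", "press the power button"]},
    }

    def choose_goal(goals):
        # Pick the goal judged most good. Here the scores are simply
        # given, which is exactly what a real intelligence would have
        # to decide for itself.
        return max(goals, key=lambda g: goals[g]["goodness"])

    def execute(plan):
        for action in plan:
            print("doing:", action)

    goal = choose_goal(goals)              # "learn calculus"
    execute(goals[goal]["plan"])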

If we can show that philosophy requires consciousness, then we can say that an AI must necessarily be conscious. Further, by contraposition, any machine that is not conscious cannot be intelligent.

The first question this line of reasoning suggests is: can a machine that doesn't know it is thinking be said to be thinking? Such a machine would have a hard time understanding Descartes' famous cogito, ergo sum. How could we ever say that such a machine is a philosopher? This tends to support the assertion that a philosophical machine must be conscious, leading us back to the problem of proving that some machine, asserted to be conscious, actually is conscious. The Turing test is insufficient for this purpose.

Suppose that someday somebody builds a machine that claims to be conscious.

Such a machine would not have a problem with cogito, ergo sum, nor would it have reason to dispute my monistic (but partitioned) ontology (above). To be consistent with its claims of consciousness, it would have knowledge (via experience) of its own qualia, allowing it to assert that they are fundamental elements of reality, distinct from observable physical objects and events.

I welcome your comments by email to Richard dot J dot Wagner at gmail dot com


philo.html; This hand-crafted HTML page was created January 3, 2002, and last updated June 13, 2012, by Rick Wagner.
Copyright © 2002-2012 by Rick Wagner, all rights reserved.