Still a Philosophic Quandary
IBM has a history of giving itself grand challenges when it comes to computers and artificial intelligence. The supercomputer Deep Blue was programmed to play chess and beat the world champion, Garry Kasparov, in 1997. IBM's machines have since evolved beyond making moves and counter-moves, as evidenced by the success of the supercomputer Watson on the game show Jeopardy!, famous for offering clues laden with ambiguity, irony, double meaning, riddles, puns, and wit. Watson proved to be up to the challenge when it defeated the two most successful Jeopardy! winners of all time, Ken Jennings and Brad Rutter. Has Watson shown the world that artificial intelligence has become a thinking, rationalizing, living machine?
Ontologically, what does it mean for a being to be? Watson exists in our reality as a supercomputer, but does the artificial intelligence of the supercomputer enable it to exist as a sentient being? Descartes has become known for his statement, “I think, therefore I am.” It would be a contradiction if Descartes said, “I do not exist,” because he would have to exist in order to come to that conclusion and make the statement. Descartes uses epistemological certainty to answer the ontological question of what it means to be by establishing that the self is something that we can know exists. Does Watson’s computer programming equate to thinking or reasoning, and if so, is that all a supercomputer needs to be considered alive?
Watson has a 15-terabyte database to draw from when answering the questions on Jeopardy! But the process of answering is not simple and, contrary to what one might expect from artificial intelligence, not always correct. In an ironic way, the mistakes of the supercomputer are what give it a bit of humanity. Watson makes mistakes, just as the human contestants do. Mistakes occur because the "thinking" and "reasoning" of Watson result in a confidence score. If Watson is sufficiently confident that the answer it has arrived at is correct, it makes the decision to buzz in. Additionally, Watson's ability to buzz in is not infallible. Once the decision has been made, Watson must physically depress the buzzer with a magnetic pulse sent to an electronic thumb device. Ken Jennings and Brad Rutter were in many instances able to buzz in faster than Watson.
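The confidence-scored buzz decision described above can be sketched in a few lines. This is only an illustrative toy, not IBM's actual implementation: the candidate answers, their scores, and the threshold value are all invented for the example.

```python
# Hypothetical sketch of a confidence-thresholded buzz decision.
# The scoring values and the 0.5 threshold are illustrative assumptions.

def decide_to_buzz(candidates, threshold=0.5):
    """Pick the highest-confidence candidate answer; buzz only if
    its confidence clears the threshold, otherwise stay silent."""
    best_answer, best_confidence = max(candidates, key=lambda pair: pair[1])
    if best_confidence >= threshold:
        return best_answer, best_confidence   # buzz in with this answer
    return None, best_confidence              # confidence too low: do not buzz

# Three invented candidate answers for a single clue:
answer, confidence = decide_to_buzz([("Toronto", 0.14),
                                     ("Chicago", 0.83),
                                     ("Boston", 0.21)])
print(answer, confidence)  # prints: Chicago 0.83
```

Like Watson, the sketch can be wrong in two ways: it may buzz in on a high-confidence answer that is nonetheless incorrect, or it may stay silent on a clue it could have answered.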
That last statement alone could refer to a human being named Watson rather than a supercomputer, revealing the blurry grey area that artificial intelligence inhabits between object and being. The complex algorithms Watson uses to determine answers and decide whether to buzz in could be equated to the complex synaptic connections that form human thought processes. Many philosophers and scientists have posed the question of whether human thinking is anything more than complex computing. Gilbert Ryle published a challenge to Descartes's philosophically established view that the mind is separate from the body, referring to it as "the dogma of the Ghost in the Machine" (Ryle, p. 253). Ryle argues against this separation of mind and body by outlining it as a category mistake, which arises when one recognizes only the parts instead of the whole. If we apply Ryle's argument to artificial intelligence, the lack of a separation between body and mind means that supercomputers such as Watson could indeed be viewed as having minds: on this view the mind is not an ethereal thing that cannot be recreated, but is bound to a physical body, and it is therefore possible, in principle, to recreate it with programming.
But are Watson's "thinking" and decision making enough for it to be considered alive? Even if human thinking is only a complex computing process like Watson's, more indicators are necessary for sentience. For example, Watson's decision making is not framed by an ethical or moral code. An artificial intelligence capable of violating an ethical or moral code, as a human being sometimes chooses to act immorally, would pose the kind of danger to the human race that Isaac Asimov outlined in his I, Robot series. Asimov's three laws of robotics, devised to guard against this and other dangers and still invoked by engineers today, could themselves be viewed as restricting artificial intelligence from ever being truly alive and fully cognizant.
Another potential indicator of being is whether or not something possesses the ability to learn. Epistemologically, philosophers question how knowing comes about. In order to know, one must be able to acquire knowledge, and some knowledge is acquired through experience and learning from those experiences. Watson does indeed possess the ability to learn. Over the course of hundreds of Jeopardy! practice games, Watson gathered new data by learning the correct answers where its own were wrong, as well as how better to understand the clues presented in a particular category, yielding more accurate assessments and higher confidence levels in answering correctly.
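This kind of learning from practice-game feedback can be sketched as a simple calibration loop. The update rule below is an assumption made for illustration (a basic running adjustment toward right or wrong outcomes), not a description of Watson's actual machine-learning pipeline; the category names and learning rate are likewise invented.

```python
# Illustrative sketch (not IBM's method): after each practice clue,
# nudge a per-category confidence weight toward 1.0 if the answer was
# right and toward 0.0 if it was wrong, so that later confidence
# estimates in that category become better calibrated.

def update_weight(weights, category, was_correct, rate=0.1):
    current = weights.get(category, 0.5)        # start neutral at 0.5
    target = 1.0 if was_correct else 0.0
    weights[category] = current + rate * (target - current)
    return weights[category]

weights = {}
update_weight(weights, "U.S. Presidents", was_correct=True)
update_weight(weights, "U.S. Presidents", was_correct=True)
update_weight(weights, "Wordplay", was_correct=False)
# After practice, "U.S. Presidents" sits above 0.5 and "Wordplay" below it.
```

Over many practice games, such a loop shifts confidence toward the categories a system handles well, which is one plausible reading of the learning the paragraph describes.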
Watson's ability to process human language with its many nuances, including puns, wit, wordplay, and the potential multiple meanings of a single word, is another possible indicator of whether the supercomputer could be considered alive. In any case, Watson certainly represents a huge leap in the field of natural-language processing, the ability to understand and respond to everyday English. Jeopardy! questions cover the entire domain of human knowledge, with every subject imaginable a potential clue, and the clues themselves are full of the everyday-English booby traps that were once difficult for computers to process and understand, let alone respond to. Watson has learned over the last few years to recognize these myriad complexities of English and respond correctly. Even so, Watson does not possess a sense of humor with which to laugh at Trebek's jokes, or the capacity to feel disappointed when an answer is incorrect.
Still, the question remains whether this is truly sentience worthy of consideration as a "thinking entity," or merely a form of algorithmic conditioned response. Behaviorists such as Skinner could make a legitimate claim that learning can take place without cognition. For Skinner, the environment shapes a person's behavior, and the principles of contiguity and reinforcement are central to the behaviorist explanation of the learning process (Ormrod, 2012). As such, Watson's learning could be seen as a form of classical and operant conditioning.
Under the behaviorist framework of learning, classical conditioning is a type of learning that develops in response to stimuli that do not naturally elicit the behavior. Pavlov's dogs came to associate the ringing of a bell with the arrival of meat and would salivate at the sound of the bell alone, even when no meat was presented, demonstrating learning through classical conditioning. People are classically conditioned every day by the stimuli surrounding them, and children seem especially susceptible to this fundamental process since they lack a formal knowledge base. One could argue that conditioning does not involve cognition, since the learning seems to lack conscious awareness, as in the case of Pavlov's dogs. On this view, Watson may simply be responding to different stimuli via its algorithms, making it a wonder of technology but still mindless of what it is doing.
Even if Watson does not possess enough attributes to be considered alive, it is so sophisticated that it may still be treated like a human. Daniel Dennett describes intentional systems in a 1978 article, using the chess-playing supercomputers of his day, machines Watson has since far outstripped technologically but which remain intentional systems. Dennett outlines how the best chess-playing computers of the time had become practically inaccessible to prediction from either the design stance or the physical stance because their programming had grown too complex. Rather, players could only hope to win by treating the machine like a regular human opponent and making the best moves or counter-moves available.
Ken Jennings and Brad Rutter were placed in the same situation, treating Watson like a fellow human opponent with a big shiny screen and trying to buzz in before the others could. An assumption of rationality forms the basis of the intentional system, ascribing to it the possession of certain information and supposing it to be directed by certain goals. Following this reasoning, Dennett states, "it is a small step to calling the information possessed the computer's beliefs, its goals and subgoals its desires" (Dennett, p. 269). The confidence level that determines whether Watson decides to buzz in could in this sense be viewed as a varying level of belief in the correctness of its answers; the judgment of whether that level of belief is sufficient to buzz in is made with the goal of answering correctly and ultimately winning.
Watson's artificial intelligence has not developed to the point of being a sentient being, but it could be considered an autonomous intelligence. Its ability to learn, process natural language, respond, rationalize, and make decisions reveals the complexity of a program that, in a sense, takes on a mind of its own.
Asimov, Isaac. I, Robot. New York: Bantam Dell, 1950.
Dennett, Daniel. “Intentional Systems.” 1978. Introduction to Philosophy: Classical and Contemporary Readings. Eds. John Perry, Michael Bratman, and John Martin Fischer. 5th ed. New York: Oxford, 2010. 267–279.
Gondek, David. "How Watson 'sees', 'hears', and 'speaks' to play Jeopardy!" IBM Research. January 10, 2012. Retrieved on October 19, 2012 from http://ibmresearchnews.blogspot.com/2010/12/how-watson-sees-hears-and-speaks-to.html
Ormrod, Jeanne. Human Learning. 6th ed. Boston: Pearson, 2012.
Ryle, Gilbert. “Descartes’s Myth.” 1949. Introduction to Philosophy: Classical and Contemporary Readings. Eds. John Perry, Michael Bratman, and John Martin Fischer. 5th ed. New York: Oxford, 2010. 251–258.