Although most scientific disciplines, such as mathematics, physics, chemistry, and biology, are well defined, the field of artificial intelligence (AI) remains enigmatic. Indeed, Hofstadter (1985, p. 633) remarks, "The central problem of AI is the question: What is the letter 'a'? Donald Knuth, on hearing me make this claim once, appended, 'And what is the letter "i"?'-an amendment that I gladly accept." Despite nearly 50 years of research in the field, no widely accepted definition of artificial intelligence exists.
Artificial intelligence is sometimes defined as the manipulation of symbols for problem solving (e.g., Buchanan and Shortliffe, 1985, p. 3) or by tautological statements such as artificial intelligence is "the part of computer science concerned with developing intelligent computer programs" (Waterman, 1986, p. 10). Rich (1983, p. 1) offered, "Artificial intelligence (AI) is the study of how to make computers do things at which, at the moment, people are better." But this definition, if regarded statically, precludes the very existence of artificial intelligence. Once a computer program exceeds the capabilities of a human, the program is no longer in the domain of AI.
The majority of proffered definitions of artificial intelligence rely on comparisons to human behavior. Staugaard (1987, p. 23) attributes a definition to Marvin Minsky-"the science of making machines do things that would require intelligence if done by men"-and suggests that some define AI as the "mechanization, or duplication, of the human thought process." Charniak and McDermott (1985, p. 6) offer, "Artificial intelligence is the study of mental faculties through the use of computational models," while Schildt (1987, p. 11) claims, "An intelligent program is one that exhibits behavior similar to that of a human when confronted with a similar problem. It is not necessary that the program actually solve, or attempt to solve, the problem in the same way that a human would."
The question, "What is AI?" would become mere semantics if only the answers did not suggest or imply radically different avenues of research: "Some researchers simply want machines to do the various sorts of things that people call intelligent. Others hope to understand what enables people to do such things. Still other researchers want to simplify programming" (Minsky, 1991). Yet Minsky also laments, "Why can't we build, once and for all, machines that grow and improve themselves by learning from experience? Why can't we simply explain what we want, and then let our machines do experiments or read some books or go to school, the sorts of things that people do. Our machines today do no such things."
1.2 THE TURING TEST
Turing (1950) considered the question, "Can machines think?" Rather than define the terms machines or think, he proposed a test that begins with three people: a man (A), a woman (B), and an interrogator (C). The interrogator is to be separated from both A and B, say, in a closed room (Figure 1-1), but may ask questions of both A and B. The interrogator's objective is to determine which of A and B is the man and which is the woman. It is A's objective to cause C to make an incorrect identification. Turing (1950) provided the following example of a question posed to the man:
"C: Will X [C's name for A] please tell me the length of his or her hair?"
"A: My hair is shingled, and the longest strands are about nine inches long."
Player A may be deceitful, if he so desires. In contrast, the object for B is to help the interrogator. Turing suggested that the probable best strategy for her is to give truthful answers. In order that the pitch of the voice or other external clues may not aid in C's decision, a teleprinter was to be used for communication between the rooms.
Turing then replaced the original question, "Can machines think?" with the following: "We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" (Turing, 1950). This question separates the physical and intellectual capacities of humans. The form of interrogation prevents C from using sensory information regarding A's or B's physical characteristics. Presumably, if the interrogator were able to show no increased ability to decide between A and B when the machine was playing as opposed to when the man was playing, then the machine would be declared to have passed the test. Whether or not the machine should then be judged capable of thinking was left unanswered. Turing (1950) in fact dismissed the original question as being "too meaningless to deserve discussion."
Turing (1950) limited the possible machines to be the set of all digital computers. He indicated through considerable analysis that these machines are universal, that is, all computable processes can be executed by such machines. With respect to the suitability of the test itself, Turing (1950) thought the game might be weighted "too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing." (Hofstadter, 1985, pp. 514-520, relates an amusing counterexample in which he was temporarily fooled in such a manner.)
Turing (1950) considered and rejected a number of objections to the plausibility of a "thinking machine," although somewhat remarkably he felt an argument supporting the existence of extrasensory perception in humans was the most compelling of all objections. The Lady Lovelace objection (Countess of Lovelace, 1842), referring to a memoir by the Countess of Lovelace on Babbage's Analytical Engine, is the most common of present refutations of a thinking machine. The argument asserts that a computer can only do what it is programmed to do and therefore will never be capable of generating anything new. Turing (1950) countered this argument by equating it with a statement that a machine can never take us by surprise. But he noted that machines often act in unexpected ways because the entire determining set of initial conditions of the machine is generally unknown; therefore, an accurate prediction of all possible behavior of the mechanism is impossible.
Moreover, he suggested that a thinking machine should be a learning machine, capable of altering its own configuration through a series of rewards and punishments. Thus it could modify its own programming and generate unexpected behavior. He speculated that "in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9 [bits], to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent chance of making the right identification after five minutes of questioning" (Turing, 1950).
1.3 SIMULATION OF HUMAN EXPERTISE
The acceptance of the "Turing Test" focused attention on mimicking human behavior. At the time (1950), it was beyond any reasonable consideration that a computer could pass the Turing Test. Rather than focus on imitating human behavior in conversation, attention was turned to more limited domains of interest. Simple two-person games of strategy were selected. These games received attention for at least three reasons: (1) Their rules are static, known to both players, and easy to express in a computer program, (2) they are examples from a class of problems concerned with reasoning about actions, and (3) the ability of a game-playing computer can be measured against human experts.
The majority of research in game playing has been aimed at the development of heuristics that can be applied to two-person, zero-sum, nonrandom games of perfect information (Jackson, 1985). The term zero-sum indicates that any potential gain to one player will be reflected as a corresponding loss to the other player. The term nonrandom means that the allocation and positioning of resources in the game (e.g., pieces on a chess board) is purely deterministic. Perfect information indicates that both players have complete knowledge regarding the disposition of both players' resources (e.g., tic-tac-toe, not poker).
The general protocol was to examine an expert's decisions during a game so as to discover a consistent set of parameters or questions that are evaluated during his or her decision-making process. These conditions could then be formulated in an algorithm capable of generating behavior similar to that of the expert when faced with identical situations. It was believed that if a sufficient quantity or "coverage" of heuristics could be programmed into the computer, the sheer speed and infallible computational ability of the computer would enable it to match or even exceed the ability of the human expert.
1.3.1 Samuel's Checker Program
One of the earliest efforts along these lines was offered by Samuel (1959), who wrote a computer program that learned to play checkers. Checkers was chosen for several reasons: (1) There is no known algorithm that provides for a guaranteed win or draw, (2) the game is well defined with an obvious goal, (3) the rules are fixed and well known, (4) there are human experts who can be consulted and against which progress of the program can be tested, and (5) the activity is familiar to many people. The general procedure of the program was to look ahead a few moves at a time and evaluate the resulting board positions.
This evaluation was made with respect to several selected parameters. These parameters were then included in a linear polynomial with variable coefficients. The result of the polynomial indicated the worth of the prospective board under evaluation. The most critical and obvious parameter was the inability for one side or the other to move. This can occur only once in a game and was tested separately. Another clearly important consideration was the relative piece advantage. Kings were given 50 percent more weight than regular pieces. Samuel (1959) tried two alternative methods for including additional parameters. Initially, Samuel himself chose these terms, but he later allowed the program to make a subset selection from a large list of possible parameters.
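The scoring scheme described above amounts to a weighted linear combination of board features. The following sketch illustrates the idea; the feature names and coefficient values are hypothetical, not Samuel's actual terms or weights.

```python
# Sketch of a linear evaluation polynomial in the style of Samuel (1959).
# Feature names and coefficients are illustrative assumptions only.

def evaluate(board_features, coefficients):
    """Score a prospective board as a weighted sum of its features."""
    return sum(coefficients[name] * value
               for name, value in board_features.items())

# Example weights: the king term is weighted 50 percent more than the
# regular-piece term, echoing Samuel's treatment of kings; the mobility
# term and all numeric values are hypothetical.
coefficients = {"piece_advantage": 2.0, "king_advantage": 3.0, "mobility": 0.5}
features = {"piece_advantage": 1, "king_advantage": 0, "mobility": 4}
score = evaluate(features, coefficients)  # 2.0*1 + 3.0*0 + 0.5*4 = 4.0
```

Because the polynomial is linear, adjusting play amounts to adjusting the coefficient vector, which is what made Samuel's learning scheme tractable.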
To determine a move, the game tree of possible new boards was searched. A minimax procedure was used to discover the best move. The ply, or number of levels to be searched in the tree, was initially set at three unless the next move was a jump, the last move was a jump, or an exchange offer was possible. The analysis proceeded backward from the evaluated board position through the tree of possible moves, with the assumption that at each move the opponent would always attempt to minimize the machine's score while the machine would act to maximize its score. Under these conditions the search was continued until these circumstances were no longer encountered or until a maximum of 20 levels had been searched.
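The minimax procedure can be sketched abstractly as follows. The `Node` class here is a hypothetical stand-in for a checkers board position; the real program would score frontier boards with its evaluation polynomial rather than store terminal values.

```python
# Minimal minimax sketch over an abstract game tree (illustrative only).
class Node:
    def __init__(self, value=None, children=()):
        self.value, self.children = value, list(children)

    def score(self):
        return self.value

def minimax(node, depth, maximizing):
    """Best achievable score assuming both sides play optimally.

    The machine (maximizing) picks the move whose worst-case reply,
    found by the minimizing opponent one ply down, is best.
    """
    if depth == 0 or not node.children:
        return node.score()
    child_scores = (minimax(c, depth - 1, not maximizing) for c in node.children)
    return max(child_scores) if maximizing else min(child_scores)

# Tiny two-ply tree: the opponent holds the first branch to 3 and the
# second to 2, so the maximizer's best guaranteed outcome is 3.
tree = Node(children=[
    Node(children=[Node(3), Node(5)]),
    Node(children=[Node(2), Node(9)]),
])
best = minimax(tree, 2, True)  # -> 3
```

Samuel's ply-extension rules (searching deeper when jumps or exchanges are pending) correspond to making `depth` conditional on the move type rather than fixed.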
After initial experiments in which the selected polynomial had four terms (piece advantage, denial of occupancy, mobility, and a hybrid term that combined control of the center and piece advancement), the program was allowed to select a subset of 16 parameters from a list of 38 chosen parameters. Samuel allowed the computer to compete against itself; one version, Alpha, constantly modified the coefficients and parameters of its polynomial, and the other version, Beta, remained fixed (i.e., it was replaced by Alpha after a loss). A record of the correlation existing between the signs of the individual term contributions in the initial scoring polynomial and the sign of the change between the scores was maintained, along with the number of times that each particular term with a nonzero value was used. The coefficient for the polynomial term of Alpha with the then-largest correlation coefficient was set at a prescribed maximum value with proportionate values determined for all the remaining coefficients. Samuel noted some possible instabilities with this modification technique and developed heuristic solutions to overcome these problems. Term replacement was made when a particular parameter had the lowest correlation eight times. Upon reaching this arbitrary limit, it was placed at the bottom of the reserve list and the first parameter in the reserve list was inserted into the scoring polynomial.
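The Alpha/Beta self-play arrangement can be sketched as a simple loop in which the adapting player replaces the fixed one whenever the fixed one loses. Note that `perturb` and `play_game` below are placeholders: the actual program adjusted coefficients using the correlation bookkeeping described above, not random perturbation.

```python
import random

# Sketch of Samuel's Alpha/Beta self-play scheme. The perturbation and
# game-playing functions are stand-ins for illustration only.

def perturb(weights, rng):
    """Placeholder update; Samuel used correlation-driven coefficient changes."""
    return {k: v + rng.uniform(-0.1, 0.1) for k, v in weights.items()}

def self_play(initial_weights, n_games, play_game, rng=None):
    rng = rng or random.Random(0)
    beta = dict(initial_weights)        # Beta: the fixed reference player
    alpha = perturb(beta, rng)          # Alpha: continually modified player
    for _ in range(n_games):
        if play_game(alpha, beta):      # True means Alpha won the game
            beta = dict(alpha)          # Beta is replaced after its loss
        alpha = perturb(alpha, rng)     # Alpha keeps adapting regardless
    return beta
```

The surviving `beta` weights represent the best player found so far, mirroring the role of the fixed version in Samuel's experiments.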
After a series of 28 games, Samuel described the program as being a better-than-average player. "A detailed analysis of these results indicated that the learning procedure did work and that the rate of learning was surprisingly high, but that the learning was quite erratic and none too stable" (Samuel, 1959). Refinements were made to the method of altering the scoring polynomial to help prevent this problem.
In 1962, at the request of Edward Feigenbaum and Julian Feldman, Samuel arranged for a match between his program and Robert W. Nealy, a purported former Connecticut checkers champion. Samuel's program defeated Nealy, who commented (cited in Samuel, 1963):
Our game ... did have its points. Up to the 31st move, all of our play had been previously published, except where I evaded "the book" several times in a vain effort to throw the computer's timing off. At the 32-27 [a specific move] loser and onwards, all the play is original with us, so far as I have been able to find. It is very interesting to me to note that the computer had to make several star moves in order to get the win, and that I had several opportunities to draw otherwise. That is why I kept the game going. The machine, therefore, played a perfect ending without one misstep. In the matter of the end game, I have not had such competition from any human being since 1954, when I lost my last game.
The moves of the game appear in Samuel (1963).
In retrospect, perhaps more acclaim was given to this result than was deserved. Schaeffer (1996, p. 94) indicated that Nealy was in fact not a former Connecticut state champion at the time of the match against Samuel's program, although he did earn that title in 1966, four years later. Moreover, Nealy did not enter the U.S. Championship checkers tournament, and thus the strength of his play at the national level was based more on opinion than on record. Schaeffer (1996, pp. 94-95) reviewed the sequence of moves from the Nealy match, and with the aid of Chinook (a current world champion artificial intelligence checkers program designed by Schaeffer and his colleagues), indicated that Nealy made several blunders during the game and that Samuel's checkers program also did not capitalize on possible opportunities. In sum, the glowing description that Nealy gave of Samuel's program's endgame is well accepted in the literature but is an overstatement of the program's ability. Schaeffer (1996, p. 97) also reported that, in 1966, Samuel's program was played against the two persons vying for the world championship. Four games were played against each opponent, with Samuel's program losing all eight games.
1.3.2 Chess Programs
Researchers in artificial intelligence have also been concerned with developing chess programs. Initial considerations of making machines play chess date to Charles Babbage (1792-1871). Babbage had described the Analytical Engine, a theoretical mechanical device that was, in effect, a digital computer, although not an electronic one. This machine was never built (but an earlier design, the Difference Engine, was in fact successfully constructed only as recently as 1991; Swade, 1993). Babbage recognized that, in principle, his Analytical Engine was capable of playing games such as checkers and chess by looking forward to possible alternative outcomes based on current potential moves.
Excerpted from Evolutionary Computation by David B. Fogel Excerpted by permission.
Contents (excerpt):
1. Defining artificial intelligence
3. Computer simulation of natural evolution
4. Theoretical and empirical properties of evolutionary computation
Posted June 6, 2013