Arguing A.I.: The Battle for Twenty-First Century Science
Author: Sam Williams · Language: English · Paperback – 31 Dec 2001
In Arguing A.I., journalist Sam Williams charts both the history of artificial intelligence from its scientific and philosophical roots and the history of the A.I. debate. He examines how and why the tenor of the debate has changed over the last half-decade in particular, as scientists struggle to take into account the latest breakthroughs in computer science, information technology, and human biology. For every voice predicting machines like 2001’s HAL within the next twenty to thirty years, others have emerged with more pessimistic forecasts. From artificial intelligence’s pioneers John McCarthy and Marvin Minsky, to futurist authors Ray Kurzweil and Hans Moravec, to software architects Bill Joy and Jaron Lanier, Arguing A.I. introduces readers to the people participating in the current debate, both proponents and critics of A.I. who are changing the way computers “think” and the way we think about computers.
Ultimately, Arguing A.I. is as much a history of thought as it is a history of science. Williams notes that many of the questions plaguing modern scientists and software programmers are the same questions that have concerned scientists and philosophers since time immemorial: What are the fundamental limitations of science and scientific inquiry? What is the nature of intelligence? And, most important, what does it really mean to be human?
Price: 79.71 lei
New
Express Points: 120
Estimated price in other currencies:
15.25€ • 16.04$ • 12.71£
In stock
Economy delivery: 14-28 December
Phone orders: 021 569.72.76
Specifications
ISBN-13: 9780812991802
ISBN-10: 081299180X
Pages: 128
Dimensions: 130 x 206 x 10 mm
Weight: 0.14 kg
Publisher: Random House Crown
Biographical Note
Sam Williams is a freelance writer whose commentaries on software and software culture have appeared in Upside Today (www.upside.com) and on BeOpen.com. He also writes for numerous magazines and newspapers. He lives in Brooklyn, New York, with his wife, Tracy.
Excerpt
Chapter 1
The Inspiration: Hilbert and Turing
At the height of the Second International Congress of Mathematicians in Paris in August 1900, German mathematician David Hilbert offered a poetic introduction to what would later be known as his “Twenty-three Problems” lecture, a milestone speech many mathematical historians credit with laying the foundation of twentieth-century mathematics [http://aleph0.clarku.edu/~djoyce/hilbert/problems.html]. “Who among us would not be glad to lift the veil behind which the future lies hidden, to cast a glance at the next advances of our science and the secrets of its development in future centuries?” he asked.
A hundred years later, Hilbert’s words offer a poetic introduction to the history of artificial intelligence as well. Artificial intelligence is, after all, a science inextricably linked to the future. Read any book on A.I. and it’s easy to detect a similar desire to bear witness to the future. The desire to “lift the veil” separating today’s earnest investigation from tomorrow’s common knowledge is as strong for A.I. researchers as it was for Hilbert and his colleagues a century ago.
The similarity is a familial one. Although the science of artificial intelligence as we now know it didn’t emerge until a full decade after Hilbert’s death in 1943, many of the theories that gave rise to that science descend directly from ideas posed by Hilbert at that fateful Paris lecture. The same goes for the spirit of artificial intelligence. Conceived in the collaborative science projects of World War II and nurtured in the postwar era of big science, A.I., too, draws its heritage from the post-Paris “program” created by Hilbert and his disciples at Germany’s Göttingen University in the decades prior to the Nazi seizure of power.
“Hilbert was a giant among mathematicians,” writes mathematical historian Mary Tiles in Mathematics and the Image of Reason. “It is hard to overestimate his influence over the character of twentieth century mathematics; so many of the great names in mathematics worked under him or worked with him.”
Hilbert was born in 1862 and raised in the East Prussian city of Königsberg. Now the Russian city of Kaliningrad, Königsberg in the nineteenth century was best known as the home of the eighteenth-century Prussian philosopher Immanuel Kant. Growing up in Kant’s prodigious intellectual wake, Hilbert developed an early affinity for numbers and logic that would prompt him to pursue a career in mathematics, much to the consternation of his father, Otto Hilbert, a Prussian judge.
Like Kant, Hilbert saw mathematics as the vehicle through which the human mind displayed its ultimate capacity for reason. Both men echoed the sentiments of Plato, who, according to legend, had the statement “Let no man ignorant of geometry enter here” inscribed over the doorway of his Athenian Academy as a testament to the relationship between mathematics and critical thinking.
Kant argued that the mathematical discipline of geometry offered evidence of the mind’s innate, or a priori, reasoning abilities. Science, Kant said, bases its discoveries on empirical observation, but geometry, which rests atop the abstract postulates first outlined in Euclid’s Elements, generates notions of space and time that anticipate empirical observation. According to Kant, this anticipation was more than just a coincidence. It was an indicator of the human mind’s ability to give shape to the universe even before it was observed. “There can be no doubt that all our knowledge begins with experience,” wrote Kant in the introduction to his masterwork, Critique of Pure Reason [www.arts.cuhk.edu.hk/Philosophy/Kant/cpr/]. “But though all our knowledge begins with experience it does not follow that it all arises out of experience.” Such ideas flew in the face of the empiricist school of philosophy, a branch led by British intellectuals such as David Hume, George Berkeley, and John Locke. Together, these men saw the mind as little more than a blank slate, a device much like a loom or a refractive lens that requires tangible input in order to generate meaningful output. “All our ideas or more feeble perceptions are copies of our impressions,” wrote Hume in his 1758 book Enquiry Concerning Human Understanding [www.utm.edu/research/hume/wri/lenq/lenq.htm], adding later that “the unexperienced reasoner is no reasoner at all.”
Kant’s eighteenth-century views took a beating throughout the nineteenth century as mathematicians earnestly probed the structural weaknesses of Euclidean geometry. While studying at the University of Königsberg, Hilbert delivered a defense of Kant’s “synthetic” a priori arguments in the realm of arithmetical judgment, and in 1899 he published Grundlagen der Geometrie (Foundations of Geometry), a virtuoso work that fused the mathematics of Euclidean and non-Euclidean geometry into a single, theoretically sound structure. As a personal tribute to his countryman, Hilbert used a Kant quotation for the book’s epigraph: “All human knowledge begins with intuitions, then passes to concepts and ends with ideas.”
In the course of defending Kant, Hilbert experienced a profound epiphany: Arguments over Euclidean postulates were really just arguments over symbolic relationships. Infinite planes, parallel lines, and right angles were little more than window dressing: elegant, man-made devices designed to make the underlying concepts more appealing to the human eye. “One must be able to say at all times - instead of points, straight lines, and planes - tables, chairs, and beer mugs,” quipped Hilbert to an academic colleague, summing up his abstract approach to both geometry and mathematics as a whole.
By the time of his Paris speech, Hilbert, then thirty-eight, had built up a sizable reputation as a mathematical reformer. Following Foundations, he was soon looking for new challenges. His intuition told him that the rest of mathematics, particularly arithmetic, geometry’s logical cousin, was ripe for a similar overhaul. If mathematicians could prove it to be both consistent and complete (that is, that none of its foundational axioms contradicted one another or left room for loopholes), Kant and Plato’s vision of mathematics as man’s innate link to the infinite might be that much closer to proof.
Sensing that such a project was beyond the capacity of a single mathematician, Hilbert set out to make it a collaborative crusade. He spent the spring and most of the summer of 1900 preparing a speech for the Paris Congress that would rally other mathematicians to the reformist cause.
During that speech, Hilbert gave voice to his philosophical vision. Citing what he called the “axiom of provability,” Hilbert insisted that all mathematicians work with the conviction that problems exist to be solved. As a devotee of truth, Hilbert objected to the so-called revolt from reason led by nineteenth-century philosophers such as Friedrich Nietzsche and even scientists such as Sigmund Freud. In Hilbert’s view, mathematics represented the last bastion of rational thought in a world increasingly given over to irrationality, instinct, and subjective interpretation. To drive this conflict home, Hilbert alluded to the Latin maxim Ignoramus et ignorabimus (“Ignorant we are, and ignorant we shall remain”) during his speech. It was a phrase that had enjoyed popularity among nineteenth-century Romantic thinkers, and Hilbert wished to hold it up for public ridicule. “In mathematics there is no ignorabimus,” he told his colleagues. “We hear within us the perpetual call: There is the problem. Seek its solution. You can find it by pure reason.”
Following the philosophical introduction, Hilbert laid out ten of the twenty-three major problems he considered most important to the future development of mathematics. Some of the problems were specific: a solution to Fermat’s last theorem. Some were general: the restructuring of physics according to mathematical axioms, for example. By the speech’s end, the assembled audience was more interested in arguing about the specific details of each problem than discussing the overarching anti-ignorabimus philosophy espoused by Hilbert.
Within a decade, however, Hilbert’s twenty-three problems had become an agenda of sorts. As the head of the Göttingen mathematics department, Hilbert used his power to build what became known as the “Göttingen program,” a collective effort to tackle the twenty-three problems on the Paris list. Chief among them was Problem No. 2, the demand for a proof of the completeness and consistency of arithmetic, out of which Hilbert and his German students would later distill the closely related Entscheidungsproblem, or “decision problem.”
Just as the encyclopedic tendencies of Enlightenment thinkers would trigger the Romantic backlash in the nineteenth century, so too would the philosophies of Hilbert and his peers trigger a similar backlash in the twentieth. The backlash came in the form of three proofs, all of which would have a major impact on the future science of artificial intelligence. In 1931, Austrian mathematician Kurt Gödel published a paper that undercut the Hilbert quest for a proof of arithmetic completeness. Any formal system large enough to include the logic of arithmetic, Gödel argued, was large enough to include “true” statements unprovable under the logical rules of that formal system.
Although the language was esoteric, the logic was inescapable. No matter how many axioms Hilbert and his students produced, there would always be room for logical loopholes, statements that said, in effect, “This statement is unprovable in this formal system.” To make this logic even more maddening to Hilbert and his colleagues, Gödel expressed his argument numerically, relying on the power of infinity to provide abundant room for the “Gödel numbers” that existed outside any finite mathematical system. In one fell swoop, Gödel had reinserted the verb ignorabimus into the vocabulary of modern mathematics.
If this one blow wasn’t enough, a two-blow combination in 1936 would finally do in the vaunted Hilbert program. That year, British mathematician Alan Turing and American logician Alonzo Church published near-simultaneous papers undermining the mathematical notion of “decidability,” an outgrowth of the Entscheidungsproblem. In order to prove completeness, Hilbert and his students first had to show that the mechanism of proof was itself free of holes. In investigating this side problem, Turing and Church found that there existed well-defined mathematical questions that no mechanical procedure, or algorithm, could ever settle.
Of the two papers, Turing’s took the more creative approach. Instead of viewing the question as an issue of logic, Turing refashioned it as a matter of mechanical engineering. Machines are, by definition, mechanical. They execute invariable processes that are the embodiment of formal logic. In his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem,” Turing outlined the structure of an imaginary machine that did nothing but add and subtract numbers. Dubbing his device a “logical computing machine,” Turing hypothesized that the device should be able to execute any finite arithmetic procedure by breaking it down into a series of logical steps [www.abelard.org/turpap2/turpap2.htm].
Although the design was imaginary (Turing pictured a tireless human clerk, or “computer,” writing and erasing numbers one step at a time on an infinitely long tape), this “finite state” method has since become the elemental model for modern computation. Analyze the performance of any modern “computer” (a term once used to refer to humans, not machines) and you will find the same linear series of step-by-step logical procedures first envisioned by Turing. For this reason, today’s computers are classified as “Turing machines.”
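The mechanics are easy to see in miniature. The Python sketch below is a minimal Turing-style machine; the function name, rule-table format, and the example “flip” machine are illustrative inventions rather than anything from Turing’s 1936 paper, but the loop (read a symbol, consult a finite table, write, move, repeat) is exactly the step-by-step procedure described above.

```python
# A minimal sketch of a Turing-style machine (names are illustrative,
# not from the 1936 paper): a finite table of rules drives a head that
# reads and writes symbols on a tape, one step at a time.

def run_turing_machine(rules, tape, state="start", steps=100):
    """Execute up to `steps` transitions; return the final tape.

    `rules` maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right).
    """
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(steps):
        symbol = tape.get(head, " ")
        if (state, symbol) not in rules:
            break  # no applicable rule: the machine halts
        state, new_symbol, move = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Example: a two-rule machine that inverts a string of binary digits.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_turing_machine(flip, "10110"))  # -> "01001"
```

However toy-like, the same ingredients (finite rule table, unbounded tape, one operation per step) suffice in principle for any finite arithmetic procedure, which is the claim the chapter attributes to Turing.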
For Turing, the logical computing machine was a colorful way to approach a multitude of major mathematical issues. Not only was this step-by-step process enough to perform any procedure requiring mathematical logic, it was enough to emulate the behavior of any mechanical system, period. Even with this skeleton-key capability, Turing, like Gödel before him, proved the existence of “true” statements that could not be generated by the machine alone, no matter how long the input tape, no matter how many computational steps it took. Once again, there was ignorabimus in mathematics, at least as far as the machine was concerned.
The publication of Turing’s paper alongside Church’s more complex paper effectively closed the book on Hilbert’s Göttingen program. By 1936, the vaunted mathematics department had already fallen into decline. The Nazi Party’s rise to power had led to the purging of Jewish faculty, decimating the department’s staff and sending a demoralized Hilbert into retirement. Watching this decay from a distance, Hilbert refused to yield on his belief that mathematics was a discipline based on knowledge and proof, not incompleteness and uncertainty. Upon his death in 1943, the epitaph on his Göttingen tombstone sounded a final, optimistic note. Borrowing a line from one of his last speeches, it read, “Wir müssen wissen. Wir werden wissen.” In English: “We must know. We will know.”
Although few of Hilbert’s former colleagues were still around to receive the message (many had emigrated to America and were lending their talents to the Allied war effort), the philosophy would find a new home in the scientific fields to emerge during the postwar period. Turing’s paper, while closing the door on more than three decades’ worth of work, opened up an entirely new door by introducing the science of computational theory.
Turing himself would be one of the first to recognize the expansive reaches of this new science. During the wartime years, the Cambridge scholar worked as a code-breaking specialist for the British government, helping design a few of the first primitive computing devices. Dubbed “bombes,” these largely mechanical contraptions had no memory but still offered invaluable assistance in working through the enormous number of possible cipher settings involved in wartime decryption.
Across the Atlantic, American engineers and scientists were creating even more sophisticated devices. The first, dubbed ENIAC, short for Electronic Numerical Integrator and Computer, became operational just months after the war’s completion. With eighteen thousand vacuum tubes and a weight of thirty tons, ENIAC boasted less memory capacity (sixteen kilobytes) than most of today’s handheld video games. Nevertheless, it was impressive enough to attract the attention of Hungarian mathematician and former Hilbert student John von Neumann. As mathematical adviser to the U.S. Army, von Neumann used his political influence to secure ENIAC’s services in the ongoing Manhattan Project. In order to predict the outcome of several atomic-bomb tests, physicists at Los Alamos converted their calculations to punch-card form and shipped the cards off to the U.S. Army proving ground at Aberdeen, Maryland, ENIAC’s home. The process was cumbersome, but the results came back in a fraction of the time it would have taken a team of human computers to calculate them. Von Neumann and his Los Alamos colleagues were duly impressed.
As a mathematician, von Neumann saw the link between ENIAC and the prewar paper put out by Turing. During the machine’s design stage, he prodded engineers John Mauchly and J. Presper Eckert to come up with a successor machine that would meet the theoretical specifications laid out by Turing in 1936. The result of this prodding was EDVAC, the first true Turing machine and the first computer boasting enough memory to tackle complex algorithmic procedures. EDVAC went into service in 1951; its design had been foreshadowed by a von Neumann paper circulated within the academic community in 1945.
Von Neumann’s efforts paralleled similar work by Turing, who had kept abreast of postwar computing research via the Automatic Computing Engine, or ACE, a separate project in Great Britain. As early as 1944, Turing had discussed the prospects of “building a brain” mechanically. In 1947 Turing began examining ways to prove intelligent behavior in machines. That same year, he wrote a paper entitled “Intelligent Machinery,” which examined the possible counterarguments against machine intelligence. Three years later, motivated by the critiques of philosophers already questioning the possibility of machine intelligence, Turing explored the topic even further, penning “Computing Machinery and Intelligence” for the philosophical journal Mind [www.abelard.org/turpap/turpap.htm].
Like Hilbert’s Paris speech, Turing’s paper opened with a provocative question: “Can machines think?” Fearing that the word “think” was too loaded, Turing proposed a way around the word via a qualitative test. Dubbed the “Imitation Game” by Turing, this test has since been renamed the Turing test in his honor.
To the modern reader, the instructions for Turing’s Imitation Game read like a cross between the rules for a nineteenth-century parlor game and Jean-Paul Sartre’s existentialist play No Exit. Turing specifies the need for three human participants. The first participant is an interrogator of either gender. The other two participants, one male, one female, are the respondents. The interrogator must sit in a separate room and can communicate with the respondents only via a Teletype. With no access to vocal or visual cues, the interrogator must determine the identity of the person on the other end of the line via the text coming through the Teletype. Turing further complicates this task by giving each respondent permission to mimic the typing and conversational style of the other. With these rules in place, the interrogator must now determine which respondent is the male and which respondent is the female through pointed questioning.
To make the game even more interesting, Turing proposes an added twist: What if, after a few minutes of back-and-forth dialogue, somebody replaces one of the respondents with a computer capable of playing the game just as well as a human? Blind to the switch, the only way for the interrogator to determine that a machine has entered the game is to judge the intelligence level of the responses. Supposing the replies are intelligent enough, is it conceivable that the interrogator might fail to detect the switch? Such a challenge would be heavily weighted against the machine, Turing notes, but a computer victory under these circumstances would be hard to dismiss.
“May not machines carry out something which ought to be described as thinking but which is very different from what man does?” wonders Turing, playing the momentary role of a skeptical observer. “This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game successfully, we need not be troubled by the objection.”
The ultimate purpose of the Imitation Game, Turing writes, is to drive home the message that intelligence, like beauty, is in the eye of the beholder. Unless we are inside the machine seeing what it sees, thinking what it thinks, the only reliable test for intelligence is to measure its performance in situations that demand intelligent behavior.
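Viewed as a protocol, the game is simple to stage. The Python sketch below models its wiring under stated assumptions: the CannedRespondent class, the imitation_game function, and the toy judge are all hypothetical illustrations invented here; only the blind, text-only channel structure comes from Turing’s description.

```python
# A schematic sketch of the Imitation Game's structure (class and
# function names are hypothetical): the judge sees only labeled text,
# never which channel hides the machine.
import random

class CannedRespondent:
    """Stands in for a human or a machine; answers from a fixed script."""
    def __init__(self, replies):
        self.replies = replies
    def answer(self, question):
        return self.replies.get(question, "I would rather not say.")

def imitation_game(questions, judge, human, machine):
    """Return True if the judge fails to identify the machine."""
    labels = ["A", "B"]
    random.shuffle(labels)  # blind assignment, as in Turing's setup
    channels = dict(zip(labels, [human, machine]))
    transcript = []
    for q in questions:
        for label in sorted(channels):
            transcript.append((label, q, channels[label].answer(q)))
    guess = judge(transcript)  # the judge names "A" or "B" as the machine
    return channels[guess] is not machine

# Toy run: a judge who guesses at random detects nothing reliably.
human = CannedRespondent({"What is 7 x 8?": "56, I think."})
machine = CannedRespondent({"What is 7 x 8?": "56."})
fooled = imitation_game(["What is 7 x 8?"],
                        judge=lambda transcript: random.choice("AB"),
                        human=human, machine=machine)
print("machine passed:", fooled)
```

The point of the structure, as in Turing’s version, is that nothing inside the channel reveals the respondent’s nature; the judge has only behavior to go on.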
“The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion,” Turing concludes. “Nevertheless, I believe at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
Fifty-one years after their publication, Turing’s words seem prescient and naïve at the same time. Language and public opinion have certainly changed over the last half century. “User-friendly” computer interfaces and “personal” computers have reduced the emotional distance between human users and the monstrous machines of Turing’s day. In a few very limited cases it is even safe to say that computers have crossed the Turing threshold. Garry Kasparov’s comments [www.time.com/time/magazine/archive/1996/dom/960325/kasparov.html] in 1996, after watching Deep Blue employ a cunning pawn sacrifice (“I could feel - I could smell - a new kind of intelligence across the table”), seem to reinforce Turing’s closing assertions. For the most part, however, the notion of machines imitating humans in thought and behavior remains firmly fixed within the realm of science fiction. Few modern scientists equate savvy chess moves with general intelligence, just as few modern scientists equate the internal workings of a computer to the internal workings of a brain. At best, artificial-intelligence programs offer a tantalizing glimpse at the overall complexity of human intelligence; yet no matter how many times we “lift the veil” and expand that glimpse, another veil quickly appears.
Then again, the A.I. research community’s willingness to soldier on, even without a clear long-term vision, remains its greatest attribute. Like Hilbert, a mathematician whose epitaph explicitly rejects the notion that some forms of knowledge should remain permanently hidden, A.I. researchers have learned to meet adversity with optimism. Today’s A.I. researcher recognizes that for every closing door, a dozen new doors are opening up. The key to maintaining a robust discipline is guessing ahead of time which doors offer the most promising pathways beyond.
“History teaches the continuity of the development of science,” said Hilbert to his colleagues in 1900. “We know that every age has its own problems, which the following age either solves or casts aside as profitless and replaces with new ones. If we would obtain an idea of the probable development of mathematical knowledge in the immediate future, we must let the unsettled questions pass before our minds and look over the problems which the science of today sets and whose solution we expect from the future. To such a review of problems the present day, lying at the meeting of the centuries, seems to me well adapted.”
The centuries have changed but the words still resonate. As we shall soon see, the A.I. research community, its allies, and its opponents are collectively deciding which problems to “cast aside as profitless” and which problems still demand solving.