At the dawn of the computer age in the 1950s and 1960s, researchers in the emerging field of artificial intelligence (AI) confidently predicted a new wave of discoveries that would revolutionize technology and society. For example, Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work a man can do,” and Marvin Minsky wrote that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved” [Crevier1993]. Needless to say, in spite of advances in computer hardware vastly exceeding even the most optimistic predictions at the time, many of the exuberant goals of the early AI movement remain unfulfilled. Nonetheless, significant progress is being made on several fronts.
In a previous blog article, we mentioned the efforts of IBM researchers who are developing a question-answering machine, which they have named “Watson” (see [Thompson2010]). The goal of the Watson project is to develop a system so powerful and effective that it can compete with champions of Jeopardy!, a popular TV quiz show in North America. Their R&D efforts have even included bringing previous Jeopardy! champions to IBM’s laboratory to test the system at various stages of development. Earlier this year, IBM and the Jeopardy! producers announced that a televised match against human contestants (including previous Jeopardy! champions) is scheduled for late 2010.
Another interesting research project along this line is the Never-Ending Language Learning (NELL) system at Carnegie Mellon University in Pittsburgh, Pennsylvania [Lohr2010]. These researchers, supported by grants from the Defense Advanced Research Projects Agency and Google, and utilizing some computing resources provided by Yahoo!, have been fine-tuning a computer system that is trying to master the semantics of the English language by learning in a manner more like a human. In particular, NELL scans millions of Internet pages for text patterns, from which it distills facts (390,000 as of October 2010). These facts are grouped into semantic categories, such as “cities,” “universities,” “plants” and hundreds of others. Individual facts include things like “San Francisco is a city” and “sunflower is a plant.”
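To give a concrete, if greatly simplified, flavor of this pattern-based approach, the Python sketch below scans a snippet of text with a few hand-written patterns and files the matched phrases under semantic categories. The patterns, category names and sample text here are hypothetical illustrations, not NELL’s actual machinery, which relies on bootstrapped learning and confidence scoring over a vastly larger pattern set.

```python
import re
from collections import defaultdict

# A minimal, hypothetical sketch of pattern-based fact extraction in the
# spirit of NELL: scan text for a few hand-written lexical patterns and
# file the matched noun phrases under semantic categories.
PATTERNS = {
    "city": [r"\b(\w+(?:\s\w+)?) is a city\b",
             r"\bcities such as (\w+(?:\s\w+)?)\b"],
    "plant": [r"\b(\w+(?:\s\w+)?) is a plant\b",
              r"\bplants such as (\w+(?:\s\w+)?)\b"],
}

def extract_facts(text):
    """Return a mapping from category name to the set of phrases matched."""
    facts = defaultdict(set)
    for category, patterns in PATTERNS.items():
        for pattern in patterns:
            for match in re.finditer(pattern, text, flags=re.IGNORECASE):
                phrase = match.group(1).strip().lower()
                phrase = re.sub(r"^(the|a|an)\s+", "", phrase)  # drop leading articles
                facts[category].add(phrase)
    return dict(facts)

sample = ("San Francisco is a city by the bay. "
          "A sunflower is a plant that turns toward the sun.")
print(extract_facts(sample))
# {'city': {'san francisco'}, 'plant': {'sunflower'}}
```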
There are still many obstacles to this research. As a single example, consider the sentences, “The girl caught the butterfly with the spots” and “The girl caught the butterfly with the net.” Human readers immediately understand that “spots” in the first sentence refers to the butterfly, and that “net” in the second is a tool used by the girl, in part because we can “visualize” both activities. But computer programs have difficulties with such examples.
A related difficulty is that many words have troublesome multiple meanings. For example, after NELL scanned material on baked goods, it had no trouble identifying pies, breads, cakes and cookies as belonging to that category. But when NELL concluded that “Internet cookies” were baked goods as well, the result was an “avalanche of mistakes,” according to project leader Tom Mitchell. Additional details of this project are available in [Lohr2010], from which some of this note was excerpted.
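The snippet below, a hypothetical continuation of the earlier sketch, shows how such surface patterns invite exactly this kind of error: a pattern that correctly picks up “oatmeal cookies” will just as happily pick up “tracking cookies,” since nothing in the surface text alone distinguishes the two senses of the word.

```python
import re

# Hypothetical illustration: a surface pattern intended to collect baked
# goods also matches browser cookies, because the text gives no clue
# about which sense of "cookie" is meant.
PATTERN = r"\b(\w+ cookies)\b"
text = ("She baked oatmeal cookies for the fair. "
        "The site stores tracking cookies in your browser.")
print(re.findall(PATTERN, text, flags=re.IGNORECASE))
# ['oatmeal cookies', 'tracking cookies']  -- both would be filed as baked goods
```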
Still, some futurists are confidently predicting that the time will come, probably within three decades or so, when advances in both hardware and software will finally result in machine intelligence that rivals human intelligence, not only in compute-intensive tasks such as symbolic mathematical computing and high-precision numerical computation, but even in everyday “human” tasks such as language translation, analysis and understanding of written and spoken material, and, yes, driving automobiles. After that, hold on to your hats: within a few more years machine intelligence may exceed that of the entire human race [Kurzweil2000].
Along this line, astronomer Seth Shostak of the Search for Extraterrestrial Intelligence (SETI) project speculates that the emergence of machine intelligence may be inevitable once a civilization advances beyond a certain point of technology [McCormack2010]:
Once any society invents the technology that could put them in touch with the cosmos, they are at most only a few hundred years away from changing their own paradigm of sentience to artificial intelligence.
Shostak reasons that since machine intelligence would outlast and outperform its biological predecessors, it is likely that any extraterrestrial intelligence we might detect will be machines. Thus perhaps our failure so far to detect extraterrestrial intelligence is partly because we have been searching for planets and other potential habitats for biological life, rather than searching for signs of intelligence itself.
References:
- [Crevier1993] Daniel Crevier, AI: The Tumultuous Search for Artificial Intelligence, Basic Books, New York, 1993.
- [Kurzweil2000] Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Penguin Books, New York, 2000.
- [Lohr2010] Steve Lohr, “Aiming to Learn as We Do, a Machine Teaches Itself,” New York Times, 4 Oct 2010, available online.
- [McCormack2010] Shaun McCormack, “Astronomer seeks ET Machines,” Astrobiology Magazine, 1 Oct 2010, available online.
- [Thompson2010] Clive Thompson, “What Is I.B.M.’s Watson?”, New York Times, 14 Jun 2010, available online.