Google It All

Sunday, December 21, 2008

Artificial Intelligence: Realizing the Ultimate Promises of Computing

Bringing Common Sense, Expert Knowledge, and Superhuman Reasoning to Computers

Artificial Intelligence (AI) is the key technology in many of today's novel applications, ranging from banking systems that detect attempted credit card fraud, to telephone systems that understand speech, to software systems that notice when you're having problems and offer appropriate advice. These technologies would not exist today without the sustained federal support of fundamental AI research over the past three decades.

Although there are some fairly pure applications of AI -- such as industrial robots, or the Intellipath™ pathology diagnosis system recently approved by the American Medical Association and deployed in hundreds of hospitals worldwide -- for the most part, AI does not produce stand-alone systems, but instead adds knowledge and reasoning to existing applications, databases, and environments, to make them friendlier, smarter, and more sensitive to user behavior and changes in their environments. The AI portion of an application (e.g., a logical inference or learning module) is generally a large system, dependent on a substantial infrastructure. Industrial R&D, with its relatively short time-horizons, could not have justified work of the type and scale that has been required to build the foundation for the civilian and military successes that AI enjoys today. And beyond the myriad of currently deployed applications, ongoing efforts that draw upon these decades of federally-sponsored fundamental research point towards even more impressive future capabilities:

# Autonomous vehicles: A DARPA-funded onboard computer system from Carnegie Mellon University drove a van all but 52 of the 2849 miles from Washington, DC to San Diego, averaging 63 miles per hour day and night, rain or shine;

# Computer chess: Deep Blue, a chess computer built by IBM researchers, defeated world champion Garry Kasparov in a landmark performance;

# Mathematical theorem proving: A computer system at Argonne National Laboratory proved a long-standing mathematical conjecture about algebra using a method that would be considered creative if done by humans;

# Scientific classification: A NASA system learned to classify very faint signals as either stars or galaxies with superhuman accuracy, by studying examples classified by experts (a brief illustrative sketch of this kind of learning from labeled examples follows this list);

# Advanced user interfaces: PEGASUS is a spoken language interface connected to the American Airlines EAASY SABRE reservation system, which allows subscribers to obtain flight information and make flight reservations via a large, on-line, dynamic database, accessed through their personal computer over the telephone.
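The scientific-classification item above describes supervised learning: a program is fit to examples that experts have already labeled, then applied to new, unlabeled signals. The short Python sketch below illustrates that general idea on synthetic data; the feature names, the made-up star/galaxy distributions, and the use of scikit-learn's decision tree are assumptions chosen for illustration, not a description of the actual NASA system.

```python
# Illustrative sketch of supervised classification from expert-labeled examples.
# All features and data below are synthetic/hypothetical; scikit-learn is used
# only for convenience and is not the software the NASA system employed.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical image-derived features: [brightness, ellipticity, size].
# Stars are assumed compact and round; galaxies more extended and elongated.
stars = rng.normal(loc=[1.0, 0.1, 1.0], scale=0.2, size=(200, 3))
galaxies = rng.normal(loc=[0.6, 0.5, 2.0], scale=0.3, size=(200, 3))

X = np.vstack([stars, galaxies])
y = np.array(["star"] * 200 + ["galaxy"] * 200)   # the expert-provided labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X_train, y_train)                          # learn from labeled examples

print("held-out accuracy:", clf.score(X_test, y_test))
print("prediction for a new faint signal:",
      clf.predict([[0.7, 0.45, 1.8]])[0])
```

The held-out test set is what lets one check, as the experts did for the NASA system, whether the learned classifier generalizes beyond the examples it was trained on.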

In a 1977 article, the late AI pioneer Allen Newell foresaw a time when the entire man-made world would be permeated by systems that cushioned us from dangers and increased our abilities: smart vehicles, roads, bridges, homes, offices, appliances, even clothes. Systems built around AI components will increasingly monitor financial transactions, predict physical phenomena and economic trends, control regional transportation systems, and plan military and industrial operations. Basic research on common-sense reasoning, knowledge representation, perception, learning, and planning is advancing rapidly, and will lead to smarter versions of current applications and to entirely new applications. As computers become ever cheaper, smaller, and more powerful, AI capabilities will spread into nearly all industrial, governmental, and consumer applications.

Moreover, AI has a long history of producing valuable spin-off technologies. AI researchers tend to look very far ahead, crafting powerful tools to help achieve the daunting tasks of building intelligent systems. Laboratories whose focus was AI first conceived and demonstrated such well-known technologies as the mouse, time-sharing, high-level symbolic programming languages (Lisp, Prolog, Scheme), computer graphics, the graphical user interface (GUI), computer games, the laser printer, object-oriented programming, the personal computer, email, hypertext, symbolic mathematics systems (Macsyma, Mathematica, Maple, Derive), and, most recently, the software agents which are now popular on the World Wide Web. There is every reason to believe that AI will continue to produce such spin-off technologies.

Intellectually, AI depends on a broad intercourse with computing disciplines and with fields outside computer science, including logic, psychology, linguistics, philosophy, neuroscience, mechanical engineering, statistics, economics, and control theory, among others. This breadth has been necessitated by the grandness of the dual challenges facing AI: creating mechanical intelligence and understanding the information basis of its human counterpart. AI problems are extremely difficult, far more difficult than was imagined when the field was founded. However, as much as AI has borrowed from many fields, it has returned the favor: through its interdisciplinary relationships, AI functions as a channel of ideas between computing and other fields, ideas that have profoundly changed those fields. For example, basic notions of computation such as memory and computational complexity play a critical role in cognitive psychology, and AI theories of knowledge representation and search have reshaped portions of philosophy, linguistics, mechanical engineering, and control theory.
