Artificial intelligence, or AI, is a cross-disciplinary approach to understanding, modeling, and creating intelligence of various forms. It is a critical branch of cognitive science, and its influence is increasingly being felt in other areas, including the humanities. AI applications are transforming the way we interact with each other and with our environment, and work in artificially modeling intelligence is offering new insights into the human mind and revealing new forms mentality can take. This volume of original essays presents the state of the art in AI, surveying the foundations of the discipline, major theories of mental architecture, the principal areas of research, and extensions of AI such as artificial life. With a focus on theory rather than technical and applied issues, the volume will be valuable not only to people working in AI, but also to those in other disciplines wanting an authoritative and up-to-date introduction to the field.
Symbolic AI vs. neural nets. From its very inception, AI was divided into two quite distinct research streams: symbolic AI and neural nets. Symbolic AI took the view that intelligence could be achieved by manipulating symbols within the computer according to rules. Neural nets, or connectionism as cognitive scientists called the approach, instead attempted to create intelligent systems as networks of nodes, each comprising a simplified model of a neuron.
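The contrast can be made concrete with a small, purely illustrative Python sketch (none of the rules, weights, or names below come from the book): a symbolic system applies explicit if-then rules over symbols, while a connectionist node simply sums weighted inputs and "fires" when the total crosses a threshold.

```python
# Illustrative contrast between the two research streams described above.
# All rules, weights, and names are hypothetical; this is a sketch, not code from the book.

# Symbolic AI: intelligence as rule-governed manipulation of symbols.
RULES = {
    ("bird", "can_fly"): True,      # IF x is a bird THEN x can fly
    ("penguin", "can_fly"): False,  # a more specific rule overrides the default
}

def symbolic_infer(category, query):
    """Apply an explicit rule to symbols; return None if no rule matches."""
    return RULES.get((category, query))

# Connectionism: a node as a simplified model of a neuron, summing
# weighted inputs and firing when the total crosses a threshold.
def neuron(inputs, weights, threshold=0.5):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

if __name__ == "__main__":
    print(symbolic_infer("penguin", "can_fly"))        # False
    print(neuron([1.0, 0.0, 1.0], [0.4, 0.9, 0.3]))    # 1 (activation 0.7 > 0.5, so the node fires)
```

The point of the sketch is only the difference in style: the first system's knowledge is explicit and inspectable as rules, while the second's is implicit in numerical weights.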
AI officially started in 1956, launched by a small but now-famous summer conference at Dartmouth College, in Hanover, New Hampshire. (The fifty-year celebration of this conference, AI@50, was held in July 2006 at Dartmouth, with five of the original participants making it back. Some of what happened at this historic conference figures in the final section of this chapter.) Ten thinkers attended, including John McCarthy (who was working at Dartmouth in 1956), Claude Shannon, Marvin Minsky, Arthur Samuel, Trenchard More (apparently the youngest attendee, and the lone note-taker at the original ...
We must note, however, that calculators do not know what the answers to arithmetic problems are, nor do they know they are doing arithmetic.
At a very high level, one can distinguish between three different, not necessarily exhaustive, inferential strategies: analogical, domain-specific, and structural.
At the same time, there are known theoretical limits to machine learning, many of which mirror the limits on human learning.
For example, if the data are too noisy – if they are essentially random – then learning will be nearly impossible. Machine learning algorithms employ structural inference, and so if there are no patterns in the data, then there is nothing that can be inferred. Learning also requires some variation in the world, either between individuals, or between times, or between places. Machine learning algorithms cannot learn anything about a constant-valued feature, since there is nothing to learn: The constant feature is always the same. And although some situations are clearly easier for learning than ...
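These limits can be illustrated with a short, self-contained Python sketch (not taken from the chapter; the threshold-rule "learner" and all names are hypothetical): a rule fit to purely random labels does no better than chance on fresh data, and a constant-valued feature has zero variance, so there is nothing in it for any learner to exploit.

```python
# A sketch of two limits on machine learning: random labels cannot be
# predicted above chance, and a constant feature carries no information.
import random
import statistics

random.seed(0)

# 1) Noise: labels are generated independently of the feature, so the best
#    threshold rule found on training data performs at chance on new data.
train = [(random.random(), random.choice([0, 1])) for _ in range(500)]
test = [(random.random(), random.choice([0, 1])) for _ in range(500)]

def best_threshold_rule(data):
    """Pick the threshold that best separates the two labels on `data`."""
    best_t, best_acc = 0.0, 0.0
    for t in [i / 20 for i in range(21)]:
        acc = sum((x > t) == y for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

t = best_threshold_rule(train)
test_acc = sum((x > t) == y for x, y in test) / len(test)
print(f"accuracy on random labels: {test_acc:.2f}")  # hovers around 0.5

# 2) Constant feature: zero variance means no split, correlation, or weight
#    update can ever distinguish two examples by this feature.
constant_feature = [3.0] * 500
print("variance of constant feature:", statistics.pvariance(constant_feature))  # 0.0
```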
“Algorithms are successful only when they are ‘tuned’ to their domain; there are no universal learning algorithms.”
These observations suggest the conclusion that machine learning is (again) not true learning at all, but rather fast, useful detection of various patterns in data.
Great overview of current state of AI. General concepts haven't changed much since I studied AI 20 years ago, but engineering progress has been swift and impressive. Skynet will not happen in my lifetime, but the next generation should beware.
A brilliant, scholarly yet very accessible insight into the history and development of AI from multiple perspectives. An excellent primer for those interested in a broad overview.