

This is a sub-article of Artificial intelligence (AI), focusing on the development and history of artificial intelligence.

Prehistory of AI[]

Humans have always speculated about the nature of mind, thought, and language, and searched for discrete representations of their knowledge. Aristotle tried to formalize this speculation by means of syllogistic logic, which remains one of the key strategies of AI. The first is-a hierarchy was created around 260 CE by Porphyry of Tyre. Classical and medieval grammarians explored more subtle features of language that Aristotle shortchanged, and the mathematician Bernard Bolzano made the first modern attempt to formalize semantics in 1837.

Early computer design was driven mainly by the complex mathematics needed to target weapons accurately, with analog feedback devices inspiring an ideal of cybernetics. The expression "artificial intelligence" was introduced as a 'digital' replacement for the analog 'cybernetics'.

Development of AI theory[]

Much of the (original) focus of artificial intelligence research draws from an experimental approach to psychology, and emphasizes what may be called linguistic intelligence (best exemplified in the Turing test).

Approaches to artificial intelligence that do not focus on linguistic intelligence include robotics and collective intelligence approaches, which focus on active manipulation of an environment, or consensus decision making, and draw from biology and political science when seeking models of how "intelligent" behavior is organized.

AI also draws from animal studies, in particular insects, which are easier to emulate as robots (see artificial life), as well as animals with more complex cognition, such as apes, which resemble humans in many ways but have less developed capacities for planning and cognition. Some researchers argue that animals, being apparently simpler than humans, ought to be considerably easier to mimic, but satisfactory computational models of animal intelligence are not available.

Seminal papers advancing AI include "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943) by Warren McCulloch and Walter Pitts, "Computing Machinery and Intelligence" (1950) by Alan Turing, and "Man-Computer Symbiosis" (1960) by J.C.R. Licklider. See cybernetics and Turing test for further discussion.

There were also early papers denying the possibility of machine intelligence on logical or philosophical grounds, such as Minds, Machines and Gödel (1961) by John Lucas [1].

With the development of practical techniques based on AI research, advocates of AI have argued that opponents of AI have repeatedly changed their position on tasks such as computer chess or speech recognition that were previously regarded as "intelligent" in order to deny the accomplishments of AI. Douglas Hofstadter, in Gödel, Escher, Bach, pointed out that this moving of the goalposts effectively defines "intelligence" as "whatever humans can do that machines cannot".

John von Neumann (quoted by E.T. Jaynes) anticipated this in 1948 by saying, in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!". Von Neumann was presumably alluding to the Church-Turing thesis which states that any effective procedure can be simulated by a (generalized) computer.

In 1969 McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence".

Experimental AI research[]

Artificial intelligence began as an experimental field in the 1950s with such pioneers as Allen Newell and Herbert Simon, who founded the first artificial intelligence laboratory at Carnegie Mellon University, and John McCarthy and Marvin Minsky, who founded the MIT AI Lab in 1959. They all attended the Dartmouth College summer AI conference in 1956, which was organized by McCarthy, Minsky, Nathaniel Rochester of IBM and Claude Shannon.

Historically, there are two broad styles of AI research: the "neats" and the "scruffies". "Neat", classical or symbolic AI research, in general, involves symbolic manipulation of abstract concepts and is the methodology used in most expert systems. Parallel to this are the "scruffy", or "connectionist", approaches, of which artificial neural networks are the best-known example; these try to "evolve" intelligence by building systems and then improving them through some automatic process, rather than by systematically designing something to complete the task. Both approaches appeared very early in AI history. Throughout the 1960s and 1970s scruffy approaches were pushed to the background, but interest was regained in the 1980s when the limitations of the "neat" approaches of the time became clearer. However, it has become clear that contemporary methods using both broad approaches have severe limitations.

Artificial intelligence research was very heavily funded in the 1980s by the Defense Advanced Research Projects Agency in the United States and by the fifth generation computer systems project in Japan. The failure of the work funded at the time to produce immediate results, despite the grandiose promises of some AI practitioners, led to correspondingly large cutbacks in funding by government agencies in the late 1980s, leading to a general downturn in activity in the field known as AI winter. Over the following decade, many AI researchers moved into related areas with more modest goals such as machine learning, robotics, and computer vision, though research in pure AI continued at reduced levels.

Micro-World AI[]

The real world is full of distracting and obscuring detail: generally science progresses by focusing on artificially simple models of reality (in physics, frictionless planes and perfectly rigid bodies, for example). In 1970 Marvin Minsky and Seymour Papert, of the MIT AI Laboratory, proposed that AI research should likewise focus on developing programs capable of intelligent behaviour in artificially simple situations known as micro-worlds. Much research has focused on the so-called blocks world, which consists of coloured blocks of various shapes and sizes arrayed on a flat surface.
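
As an illustrative sketch only (the state representation and the clear and move helpers below are assumptions, not any historical system), a blocks-world state can be captured by recording what each block rests on, with a single rule for moving a clear block:

    # A minimal blocks-world sketch in Python (illustrative assumption, not a historical program).
    from typing import Dict

    State = Dict[str, str]  # maps each block to what it rests on ("table" or another block)

    def clear(state: State, block: str) -> bool:
        """A block is clear if no other block rests on it."""
        return all(support != block for support in state.values())

    def move(state: State, block: str, destination: str) -> State:
        """Move a clear block onto the table or onto another clear block."""
        if not clear(state, block):
            raise ValueError(block + " has something on top of it")
        if destination != "table" and not clear(state, destination):
            raise ValueError(destination + " is not clear")
        new_state = dict(state)
        new_state[block] = destination
        return new_state

    # Example: stack A on B, starting with all three blocks on the table.
    start = {"A": "table", "B": "table", "C": "table"}
    print(move(start, "A", "B"))  # {'A': 'B', 'B': 'table', 'C': 'table'}

Micro-world planners searched over sequences of such moves to reach a goal configuration, which is what made the simplified domain tractable.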

Spinoffs[]

Whilst progress towards the ultimate goal of human-like intelligence has been slow, many spinoffs have emerged along the way. Notable examples include the languages Lisp and Prolog, which were invented for AI research but are now used for non-AI tasks. Hacker culture first sprang from AI laboratories, in particular the MIT AI Lab, home at various times to such luminaries as John McCarthy, Marvin Minsky, Seymour Papert (who developed Logo there) and Terry Winograd (who abandoned AI after developing SHRDLU).

AI languages and programming styles[]

AI research has led to many advances in programming languages, including the first list processing language by Allen Newell et al., Lisp dialects, Planner, Actors, the Scientific Community Metaphor, production systems, and rule-based languages.

GOFAI (Good Old-Fashioned AI) research is often done in programming languages such as Prolog or Lisp. Bayesian work often uses Matlab or Lush (a numerical dialect of Lisp); these languages include many specialist probabilistic libraries. Real-life and especially real-time systems are likely to use C++. AI programmers are often academics who emphasise rapid development and prototyping rather than bulletproof software engineering practices, hence the use of interpreted languages that support rapid command-line testing and experimentation.

The most basic AI program is a single If-Then statement, such as "If A, then B." If you type the letter 'A', the computer will show you the letter 'B'. In effect, you are teaching the computer to perform a task: you input one thing, and the computer responds with something you told it to do or say. All programs have If-Then logic. A more complex example: if you type "Hello.", the computer responds "How are you today?" This response is not the computer's own thought, but a line written into the program beforehand. Whenever you type "Hello.", the computer always responds "How are you today?". To the casual observer it seems as if the computer is alive and thinking, but in fact it is an automated response. AI is often a long series of If-Then (or cause-and-effect) statements.
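
A minimal sketch of this idea in Python (the inputs and canned replies below are made up for illustration):

    # A single-response If-Then program: each known input maps to one fixed reply.
    responses = {
        "Hello.": "How are you today?",
        "A": "B",
    }

    def reply(user_input: str) -> str:
        # Return the canned reply, or a default when the input is unknown.
        return responses.get(user_input, "I don't understand.")

    print(reply("Hello."))  # always prints: How are you today?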

A randomizer can be added to this. The randomizer creates two or more response paths: for example, if you type "Hello", the computer may respond with "How are you today?", "Nice weather", or "Would you like to play a game?" Three responses (or 'thens') are now possible instead of one, each equally likely to appear. This is similar to a pull-cord talking doll that can respond with a number of sayings. A computer AI program can have thousands of responses to the same input, which makes it less predictable and closer to how a real person would respond, arguably because living people respond somewhat unpredictably. When thousands of inputs ("ifs") are written in (not just "Hello.") and thousands of responses ("thens") are written into the AI program, the computer can talk (or type) with most people, provided those people know which input lines to type.
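
Continuing the sketch above (again with made-up replies), the randomizer is simply a random choice among several canned responses to the same input:

    import random

    # Each input now maps to a list of possible replies; one is chosen at random.
    responses = {
        "Hello.": [
            "How are you today?",
            "Nice weather",
            "Would you like to play a game?",
        ],
    }

    def reply(user_input: str) -> str:
        options = responses.get(user_input, ["I don't understand."])
        return random.choice(options)  # each option is equally likely

    print(reply("Hello."))  # prints one of the three replies at random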

Many games, like chess and strategy games, use action responses instead of typed responses, so that players can play against the computer. Robots with AI brains would use If-Then statements and randomizers to make decisions and speak; however, the input may be an object sensed in front of the robot rather than a typed "Hello.", and the response may be to pick up the object rather than a line of text.

Chronological History[]

Historical Antecedents[]

Greek myths of Hephaestus and Pygmalion incorporate the idea of intelligent robots. In the 4th century BC, Aristotle invented syllogistic logic, the first formal deductive reasoning system.

In the 13th century, Ramon Llull, a Spanish theologian, invented paper "machines" for discovering nonmathematical truths through combinations of words taken from lists.

By the 15th and 16th centuries, clocks, the first modern measuring machines, were being produced using lathes. Clockmakers extended their craft to creating mechanical animals and other novelties. Rabbi Judah Loew ben Bezalel of Prague is said to have created the Golem, a clay man brought to life (1580).

Early in the 17th century, René Descartes proposed that the bodies of animals are nothing more than complex machines. Many other 17th-century thinkers offered variations and elaborations of Cartesian mechanism. Thomas Hobbes published Leviathan (1651), containing a material and combinatorial theory of thinking. Blaise Pascal created the second mechanical and first digital calculating machine (1642). Gottfried Leibniz improved Pascal's machine, making the Stepped Reckoner to do multiplication and division (1673), and envisioned a universal calculus of reasoning (the alphabet of human thought) by which arguments could be decided mechanically.

The 18th century saw a profusion of mechanical toys, including the celebrated mechanical duck of Jacques de Vaucanson and Wolfgang von Kempelen's phony chess-playing automaton, The Turk (1769).

Mary Shelley published Frankenstein; or, The Modern Prometheus (1818).

19th and Early 20th Century[]

George Boole developed a binary algebra (Boolean algebra) representing (some) "laws of thought." Charles Babbage & Ada Lovelace worked on programmable mechanical calculating machines.

In the first years of the 20th century, Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic. Russell, Ludwig Wittgenstein, and Rudolf Carnap led philosophy into the logical analysis of knowledge. Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London in 1923, the first use of the word "robot" in English.

Mid 20th century and Early AI[]

Warren Sturgis McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943), laying foundations for artificial neural networks. Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coined the term "cybernetics" in a 1943 paper; Wiener's popular book of that name was published in 1948. Vannevar Bush published "As We May Think" (The Atlantic Monthly, July 1945), a prescient vision of the future in which computers assist humans in many activities.

The man widely acknowledged as the father of computer science, Alan Turing, published "Computing Machinery and Intelligence" (1950) which introduced the Turing test as a way of operationalizing a test of intelligent behavior. Claude Shannon published a detailed analysis of chess playing as search (1950). Isaac Asimov published his Three Laws of Robotics (1950).

1956: John McCarthy coined the term "artificial intelligence" as the topic of the Dartmouth Conference, the first conference devoted to the subject.

Demonstration of the first running AI program, the Logic Theorist (LT), written by Allen Newell, J.C. Shaw and Herbert Simon (Carnegie Institute of Technology, now Carnegie Mellon University).

1957: The General Problem Solver (GPS) demonstrated by Newell, Shaw and Simon.

1952-1962: Arthur Samuel (IBM) wrote the first game-playing program, for checkers (draughts), to achieve sufficient skill to challenge a world champion. Samuel's machine learning programs were responsible for the high performance of the checkers player.

1958: John McCarthy (Massachusetts Institute of Technology or MIT) invented the Lisp programming language. Herb Gelernter and Nathaniel Rochester (IBM) described a theorem prover in geometry that exploits a semantic model of the domain in the form of diagrams of "typical" cases. The Teddington Conference on the Mechanization of Thought Processes was held in the UK; among the papers presented were John McCarthy's "Programs with Common Sense", Oliver Selfridge's "Pandemonium", and Marvin Minsky's "Some Methods of Heuristic Programming and Artificial Intelligence".

Late 1950s and early 1960s: Margaret Masterman and colleagues at University of Cambridge design semantic nets for machine translation.

1960's[]

1961: James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level.

1962: First industrial robot company, Unimation, founded.

1963: Thomas Evans' program, ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests. Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.

1964: Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC), shows that computers can understand natural language well enough to solve algebra word problems correctly. Bert Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems.

1965: J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language. Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It became a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed.

1966: Ross Quillian (PhD dissertation, Carnegie Institute of Technology, now CMU) demonstrated semantic nets. The first Machine Intelligence workshop was held at Edinburgh, the first of an influential annual series organized by Donald Michie and others. A negative report on machine translation (the ALPAC report) killed much work in Natural language processing (NLP) for many years.

1967: The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) was demonstrated interpreting mass spectra of organic chemical compounds, the first successful knowledge-based program for scientific reasoning. Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program, the first successful knowledge-based program in mathematics. Richard Greenblatt at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play.

1969: Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating limits of simple neural nets.

Also in 1969: at the Stanford Research Institute (SRI), Shakey the Robot demonstrated the combination of locomotion, perception and problem solving. Roger Schank (Stanford) defined the conceptual dependency model for natural language understanding, later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner. Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program and the basis of many PhD dissertations since, such as those of Bran Boguraev and David Carter at Cambridge. The first International Joint Conference on Artificial Intelligence (IJCAI) was held at Stanford.

1970s[]

1970: Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer-assisted instruction based on semantic nets as the representation of knowledge. Bill Woods described Augmented Transition Networks (ATNs) as a representation for natural language understanding. Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks.

Early 70's: Jane Robinson and Don Walker established an influential Natural Language Processing group at SRI.

1971: Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English.

1972: Prolog programming language developed by Alain Colmerauer.

1973:

  • The Assembly Robotics Group at University of Edinburgh builds Freddy Robot, capable of using visual perception to locate and assemble models.
  • The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities.

1974: Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated the power of rule-based systems for knowledge representation and inference in the domain of medical diagnosis and therapy. Sometimes called the first expert system. Earl Sacerdoti developed one of the first planning programs, ABSTRIPS, and developed techniques of hierarchical planning.

1975: Marvin Minsky published his widely read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together. The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry), the first scientific discoveries by a computer to be published in a refereed journal.

Mid 70's: Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in NLP. David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception.

1976: Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely-guided search for interesting conjectures). Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford.

Late 70's: Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration.

1978: Tom Mitchell, at Stanford, invented the concept of Version Spaces for describing the search space of a concept formation program. Herbert Simon won the Nobel Prize in Economics for his theory of bounded rationality, whose notion of "satisficing" became one of the cornerstones of AI. The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented representation of knowledge can be used to plan gene-cloning experiments.

1979: Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells". Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge. Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming. The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab. Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance.

1980s[]

1980s: Lisp machines developed and marketed. First expert system shells and commercial applications.

1980: Lee Erman, Rick Hayes-Roth, Victor Lesser and Raj Reddy published the first description of the blackboard model, as the framework for the HEARSAY-II speech understanding system. First National Conference of the American Association for Artificial Intelligence (AAAI) held at Stanford.

1981: Danny Hillis designs the Connection Machine, a massively parallel architecture that brings new power to AI and to computation in general. (He later founded Thinking Machines Corporation.)

1982: Japan's Ministry of International Trade and Industry begins the Fifth Generation Computer Systems (FGCS) project, aiming to create a "fifth generation computer" (see history of computing hardware) that would perform much of its computation using massive parallelism.

1983: John Laird and Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on Soar. James F. Allen invents the Interval Calculus, the first widely used formalization of temporal events.

Mid 80's: Neural Networks become widely used with the Backpropagation algorithm (first described by Paul Werbos in 1974).

1985: The autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments).

1987: Marvin Minsky publishes The Society of Mind, a theoretical description of the mind as a collection of cooperating agents.

1989: Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network), which grew into the system that drove a car coast-to-coast under computer control for all but about 50 of the 2850 miles.

1990s[]

1990s: Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics. Rodney Brooks' MIT Cog project, with numerous collaborators, makes significant progress in building a humanoid robot.

Early 90's: TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players.

1997: The Deep Blue chess program (IBM) beats the world chess champion, Garry Kasparov, in a widely followed match. First official RoboCup football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators.

1998: Tim Berners-Lee published his Semantic Web Road map paper [2].

Late 90's: Web crawlers and other AI-based information extraction programs become essential in widespread use of the World Wide Web. Demonstration of an Intelligent room and Emotional Agents at MIT's AI Lab. Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network.

2000 and beyond[]

2000: Interactive robopets ("smart toys") become commercially available, realizing the vision of the 18th-century novelty toy makers. Cynthia Breazeal at MIT publishes her dissertation on Sociable Machines, describing Kismet, a robot with a face that expresses emotions. The Nomad robot explores remote regions of Antarctica looking for meteorite samples.

2004: The OWL Web Ontology Language becomes a W3C Recommendation (10 February 2004).



This page uses Creative Commons Licensed content from Wikipedia (view authors).