


This is a sub-article of Artificial intelligence (AI), focusing on the Philosophy of artificial intelligence.

As is often the case with a nascent science, Artificial Intelligence ('AI') raises enough unresolved questions at the fundamental, conceptual level to warrant philosophical as well as scientific work. Much of this work intersects with topics from the philosophy of mind, but there are also philosophical topics more particular to AI. For example:

  • What is intelligence? How would we recognize whether something inhuman had it? (Or something human, for that matter?)
  • What kind of material and organization is required? Is it even possible for a creature made of metal, for example, to have intelligence comparable to a human's?
  • Even if non-organic creatures had problem-solving capabilities like a human's, could they have consciousness and emotions?
  • Supposing that we could create robots with intelligence comparable to ours, should we? What ethical stances should they take? What ethical stances should we take toward them?

Conditions for Intelligence

Attempts to construct an AI do not make much sense if we have no idea how to tell when we've succeeded. Does it count as "genuine" AI if it can beat a grandmaster at chess? If it can do that and compose a good fugue? And advise you on your love life?

The starting place for answering such questions goes back to Alan Turing and the Turing test. In outline, the test proposes a sufficient condition for intelligence: the ability to converse with a human in such a way that the human is fooled into thinking the conversation is with another human. (To remove biases based on how the AI looks, the conversation is normally imagined to take place through a text-only medium like modern-day instant messaging.)

This is a good starting place, but there are several problems with this account. For one thing, it's nothing close to a necessary condition; it seems, for example, that E.T. was intelligent even if it couldn't convince anyone of this fact due to language barriers and the like. But it's not even obviously a sufficient condition. Chatbots, for example, employ increasingly sophisticated algorithms for sounding intelligent without, as nearly all would agree, any actual understanding of the conversation.
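
To make this worry concrete, consider the following minimal ELIZA-style sketch in Python. The patterns and canned replies are invented for illustration; the point is that shallow pattern matching alone can produce passably conversational replies with nothing that could be called understanding:

    import random
    import re

    # ELIZA-style rules: a regex pattern paired with canned reply templates.
    # These rules are invented for illustration, not from any real chatbot.
    RULES = [
        (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"\bI am (.+)", ["What makes you say you are {0}?"]),
        (r"\bbecause (.+)", ["Is that the real reason?"]),
    ]

    def reply(utterance: str) -> str:
        """Return a conversational-sounding reply by pattern matching alone."""
        for pattern, templates in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                fragment = match.group(1).rstrip(".!?")
                return random.choice(templates).format(fragment)
        return "Tell me more."  # fallback keeps the conversation going

    print(reply("I feel lost after losing at chess."))
    # e.g. "Why do you feel lost after losing at chess?"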

The Very Possibility of AI

In his famous thought experiment, the Chinese room, John Searle uses a similar point to argue that AI is impossible. Searle argues that syntax is not sufficient for semantics. Put more colloquially, he argues that mere symbol manipulation, no matter how complicated, can never be enough to provide genuine meaning or understanding. Most professional philosophers in the area believe that Searle failed to establish that AI is impossible, but there is disagreement about exactly what is wrong with his argument.
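
Searle's scenario is easy to render in code. In the toy sketch below (the table entries are invented placeholders), the "room" maps incoming Chinese strings to outgoing ones by pure table lookup, and there is plainly no understanding anywhere in the program:

    # The Chinese room as pure symbol manipulation: input strings are matched
    # to output strings by shape alone. Entries are invented placeholders.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's good."
    }

    def room(symbols: str) -> str:
        """Follow the rulebook mechanically; no semantics enters anywhere."""
        return RULEBOOK.get(symbols, "请再说一遍。")  # default: "Please repeat."

    print(room("你好吗？"))  # a fluent reply produced with zero understanding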

Ethical Issues of AI

There are many ethical problems associated with working to create intelligent creatures.

  • AI rights: if an AI is comparable in intelligence to humans, then should it have comparable moral status?
  • Would it be wrong to engineer robots that want to perform tasks unpleasant to humans?
  • Would a technological singularity be a good result or a bad one? If bad, what safeguards can be put in place, and how effective could any such safeguards be?
  • Could a computer simulate an animal or human brain in such a way that the simulation deserves the same animal rights or human rights as the actual creature?
  • Under what preconditions could such a simulation be allowed to happen at all?

A major influence in the AI ethics dialogue was Isaac Asimov, who devised the fictional Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. Ultimately, his work suggests that no set of fixed laws can sufficiently anticipate the possible behavior of AI agents in human society. A criticism of Asimov's robot laws is that installing unalterable laws into a sentient consciousness would be a limitation of free will and therefore unethical. Consequently, Asimov's robot laws would be restricted to explicitly non-sentient machines, which possibly could not be made to understand them reliably under all possible circumstances.
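
To make the "fixed laws break down" point concrete, here is a toy Python sketch, not drawn from Asimov or any real system: the Action fields and the dilemma are invented. The First and Second Laws veto actions in priority order and the Third Law breaks ties; in the trolley-like dilemma below, no lawful action exists at all:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Action:
        """Hypothetical predicted consequences of a candidate action."""
        name: str
        harms_human: bool     # acting would injure a human
        inaction_harms: bool  # not acting would allow a human to come to harm
        obeys_order: bool     # the action follows the humans' current order
        preserves_self: bool  # the robot survives the action

    def permitted(a: Action) -> bool:
        """First and Second Laws veto actions, in strict priority order."""
        if a.harms_human or a.inaction_harms:  # First Law
            return False
        return a.obeys_order                   # Second Law

    def choose(options: list) -> Optional[Action]:
        """Pick a lawful action, using the Third Law only as a tie-breaker."""
        lawful = [a for a in options if permitted(a)]
        lawful.sort(key=lambda a: a.preserves_self, reverse=True)  # Third Law
        return lawful[0] if lawful else None   # None: the laws deadlock

    # A trolley-like dilemma: both intervening and standing by harm someone.
    options = [
        Action("divert", harms_human=True, inaction_harms=False,
               obeys_order=True, preserves_self=True),
        Action("wait", harms_human=False, inaction_harms=True,
               obeys_order=True, preserves_self=True),
    ]
    print(choose(options))  # None: fixed laws yield no permissible action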

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story The Planck Dive suggests a future where humanity has turned itself into software that can be duplicated and optimized, and where the relevant distinction between types of software is between sentient and non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.

Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is someone actively seeking to build more intelligent successors to the human species.

Expectations of AI

AI methods are often employed in cognitive science research, which tries to model subsystems of human cognition. Historically, AI researchers aimed for the loftier goal of so-called strong AI: simulating complete, human-like intelligence. This goal is epitomised by the fictional strong AI computer HAL 9000 in the film 2001: A Space Odyssey. This goal is unlikely to be met in the near future and is no longer the subject of most serious AI research. The label "AI" has something of a bad name due to the failure of these early expectations, aggravated by various popular science writers and media personalities such as Professor Kevin Warwick, whose work has raised expectations of AI research far beyond its current capabilities. For this reason, many AI researchers say they work in cognitive science, informatics, statistical inference or information engineering. Recent research areas include Bayesian networks and artificial life.
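
As a taste of one such research area, here is a minimal sketch of exact inference in a two-node Bayesian network (Rain -> WetGrass); the structure and probabilities are invented for illustration:

    # A two-node Bayesian network, Rain -> WetGrass, with invented numbers.
    P_RAIN = 0.2                                 # prior: P(Rain = true)
    P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}   # P(WetGrass = true | Rain)

    def posterior_rain_given_wet() -> float:
        """Compute P(Rain | WetGrass) by enumerating the joint distribution."""
        joint_rain = P_RAIN * P_WET_GIVEN_RAIN[True]            # P(Rain, Wet)
        joint_no_rain = (1 - P_RAIN) * P_WET_GIVEN_RAIN[False]  # P(~Rain, Wet)
        return joint_rain / (joint_rain + joint_no_rain)        # Bayes' rule

    print(f"P(Rain | WetGrass) = {posterior_rain_given_wet():.3f}")  # 0.692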

The vision of artificial intelligence replacing human professional judgment has arisen many times in the history of the field, and today "expert systems" are routinely used to augment or to replace professional judgment in some specialized areas of engineering and of medicine.
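
As a sketch of the underlying mechanism, here is a toy forward-chaining rule engine in the style of classic expert systems; the rules and facts are invented for illustration and are not medical advice:

    # A toy forward-chaining rule engine: each rule fires when all of its
    # premises are known facts, adding its conclusion to the fact set.
    RULES = [
        ({"fever", "rash"}, "suspect_measles"),
        ({"suspect_measles"}, "recommend_specialist_referral"),
    ]

    def forward_chain(facts):
        """Fire rules repeatedly until no rule can add a new conclusion."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(sorted(forward_chain({"fever", "rash"})))
    # ['fever', 'rash', 'recommend_specialist_referral', 'suspect_measles']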

Even though a substantial amount of AI functionality exists in everyday software, some misinformed commentators on computer technology have suggested that a good definition of AI would be "research that has not yet been commercialised". This happens because once AI is incorporated into an operating system or application, it becomes an understated feature.


This page uses Creative Commons Licensed content from Wikipedia (view authors).