

In the philosophy of artificial intelligence, strong AI is the claim that some forms of artificial intelligence can truly reason and solve problems: that it is possible for machines to become sapient or self-aware, whether or not they exhibit human-like thought processes. The term strong AI was originally coined by John Searle, who writes:

"according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind"[1]

The term "artificial intelligence" would equate to the same concept as what we call "strong AI" based on the literal meanings of "artificial" and "intelligence". However, initial research into artificial intelligence was focused on narrow fields such as pattern recognition and automated scheduling, in hopes that they would eventually allow for an understanding of true intelligence. The term "artificial intelligence" thus came to encompass these narrower fields ("weak AI") as well as the idea of strong AI.

Weak AI

In contrast to strong AI, weak AI refers to the use of software to study or accomplish specific problem-solving or reasoning tasks that do not encompass (or in some cases are completely outside of) the full range of human cognitive abilities. An example of weak AI software is a chess program such as Deep Blue. Unlike a strong AI, a weak AI does not achieve self-awareness or demonstrate a wide range of human-level cognitive abilities; it is merely an (arguably) intelligent solver of specific problems.

Some argue that weak AI programs cannot be called "intelligent" because they cannot really think. In response to the claim that weak AI software such as Deep Blue is not really thinking, Drew McDermott wrote:

"Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings." [1]

He argued that Deep Blue does possess intelligence; it simply lacks breadth of intelligence.

Others note that Deep Blue is merely a powerful heuristic search tree, and say that claims of it "thinking" about chess are like claims of a single cell "thinking" about protein synthesis: both are unaware of anything at all, and both merely follow a program encoded within them. Many of these critics are proponents of weak AI only, claiming that machines can never be truly intelligent, while some strong AI proponents reply that true self-awareness and thought as we know it may simply require a specific kind of "program", one designed to observe and take into account the processes of one's own brain. Many evolutionary psychologists suggest that humans evolved just such a capacity, particularly for social interaction and perhaps deception, two behaviours at which humans vastly outperform other animals.
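
The "heuristic search tree" description is concrete enough to sketch. Below is a minimal depth-limited minimax search with a heuristic evaluation function, the family of techniques Deep Blue scaled up with alpha-beta pruning and special-purpose hardware. It is illustrative only, played on a toy take-1-to-3-stones game rather than chess:

    # Depth-limited minimax (negamax form) with a heuristic evaluation,
    # sketched on a toy game: players alternately take 1-3 stones and
    # whoever takes the last stone wins. Illustrative, not Deep Blue's code.

    def moves(stones):
        return [n for n in (1, 2, 3) if n <= stones]

    def heuristic(stones):
        # Crude estimate used when the search is cut off: positions that
        # are a multiple of 4 are lost for the player to move.
        return -1.0 if stones % 4 == 0 else 1.0

    def negamax(stones, depth):
        # Value of the position for the player about to move.
        if stones == 0:
            return -1.0                # opponent took the last stone and won
        if depth == 0:
            return heuristic(stones)   # search cut off: fall back on the heuristic
        return max(-negamax(stones - m, depth - 1) for m in moves(stones))

    def best_move(stones, depth=6):
        return max(moves(stones), key=lambda m: -negamax(stones - m, depth - 1))

    print(best_move(10))  # -> 2, leaving the opponent a losing multiple of 4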

General artificial intelligence

General artificial intelligence research aims to create AI that can replicate human intelligence completely, often called an artificial general intelligence (AGI) to distinguish it from less ambitious AI projects. So far researchers have devoted little attention to AGI, many claiming that intelligence is too complex to be completely replicated, though a few small groups of computer scientists are doing AGI research. Organizations pursuing AGI include Adaptive AI, the Artificial General Intelligence Research Institute (AGIRI), CCortex, CodeSimian, Novamente LLC and the Singularity Institute for Artificial Intelligence. One recent addition is Numenta, a project based on the theories of Jeff Hawkins, the creator of the PalmPilot. While Numenta takes a computational approach to general intelligence, Hawkins is also the founder of the Redwood Neuroscience Institute, which explores conscious thought from a biological perspective.

By most measures, demonstrated progress towards strong AI has been limited. No system has come close to passing a full Turing test, for instance. Few active AI researchers are prepared to publicly predict whether, or when, such systems will be developed, perhaps due to the failure of bold, unfulfilled predictions for AI research progress in past years.

Philosophy of strong AI and consciousness

John Searle and most others involved in this debate address whether a machine that works solely through the transformation of encoded data could be a mind, not the wider issue of monism versus dualism (i.e., whether a machine of any type, including biological machines, could contain a mind).

Searle states in his Chinese room argument that information processors carry encoded data which describe other things. The encoded data itself is meaningless without a cross-reference to the things it describes. This leads Searle to point out that there is no meaning or understanding in an information processor itself. As a result, Searle claims that even a machine that passed the Turing test would not necessarily be conscious in the human sense.
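
Searle's point is easy to make concrete. The toy program below (a hypothetical illustration, not a serious system) produces plausible-looking Chinese replies purely by looking symbols up in a rulebook; nothing in it has any access to what the symbols mean:

    # A toy "Chinese room": input symbols are mapped to output symbols
    # by rule. Nothing in the program has access to what either side
    # means -- a hypothetical illustration of Searle's argument.
    RULEBOOK = {
        "你好吗?": "我很好，谢谢。",        # pairs of symbol strings; the
        "你叫什么名字?": "我没有名字。",    # rule-follower need not read Chinese
    }

    def room(symbols):
        # Look the input up; fall back on a fixed symbol string.
        return RULEBOOK.get(symbols, "对不起，我不明白。")

    print(room("你好吗?"))  # fluent-looking output, no understanding inside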

Some philosophers hold that if weak AI is possible, then strong AI must also be possible. Daniel C. Dennett argues in Consciousness Explained that if there is no magic spark or soul, then Man is just a machine, and he asks why the Man-machine should have a privileged position over all other possible machines when it comes to intelligence or 'mind'. In the same work he proposes his Multiple Drafts Model of consciousness. Simon Blackburn, in his introduction to philosophy, Think, points out that someone might appear intelligent even though there is no way of telling whether that intelligence is real (i.e., a 'mind'). However, if the discussion is limited to strong AI rather than artificial consciousness, it may be possible to identify features of human minds that do not occur in information-processing computers.

Many strong AI proponents believe the mind is subject to the Church-Turing thesis. Some find this belief counter-intuitive, even problematic, because an information processor can be constructed out of balls and wood: such a device would be very slow and failure-prone, but it could do anything a modern computer can do. If the mind is Turing-computable, then at least in principle a device made of rolling balls and wooden channels could contain a conscious mind.
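
The substrate-independence claim can be illustrated directly. The interpreter below runs any Turing-machine transition table, and nothing about it depends on whether the "tape" is RAM, paper, or wooden channels (the example machine, which flips every bit, is an arbitrary choice):

    # A minimal Turing-machine interpreter. By the Church-Turing thesis,
    # anything computable at all can be computed this way, so the physical
    # substrate (silicon, or balls and wood) is irrelevant in principle.

    def run(table, tape, state="start", head=0):
        cells = dict(enumerate(tape))            # sparse tape; blank is "_"
        while state != "halt":
            symbol = cells.get(head, "_")
            write, move, state = table[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example machine (arbitrary choice): flip every bit, then halt.
    FLIP = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run(FLIP, "1011"))  # -> "0100_" (final blank marks where it halted)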

Roger Penrose attacked the applicability of the Church-Turing thesis directly by drawing attention to the halting problem, arguing that certain kinds of computation cannot be performed by any information system yet appear to be performed by human minds.
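
The halting problem itself is established by a short diagonal argument, sketched below with a hypothetical halts() oracle (the point of the sketch is precisely that no such function can exist):

    # Sketch of the halting-problem contradiction. Suppose halts(f, x)
    # could always correctly decide whether f(x) eventually halts.
    # halts() is hypothetical -- the argument shows it cannot exist.

    def halts(f, x):
        raise NotImplementedError("no total decision procedure exists")

    def contrary(f):
        # Do the opposite of whatever halts() predicts about f run on itself.
        if halts(f, f):
            while True:      # predicted to halt? loop forever instead
                pass
        return None          # predicted to loop? halt immediately

    # contrary(contrary) halts if and only if it does not halt -- a
    # contradiction, so no halts() can exist. Penrose's contention is that
    # human minds can nonetheless see the answers to some such questions.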

Ultimately the truth of strong AI depends on whether information-processing machines can include all the properties of minds, such as consciousness. Weak AI, however, is independent of the strong AI problem, and there can be no doubt that many features of modern computers, such as multiplication or database searching, might have been considered 'intelligent' only a century ago.

Methods of production

Computer simulation of a human brain model

This is seen by many as the quickest means of achieving strong AI, since it does not require a complete theoretical understanding of intelligence. It would require three things:

  • Hardware. An extremely powerful computer would be required for such a model. Futurist Ray Kurzweil estimates 1 million MIPS; if Moore's law continues, this will be available for $1,000 by 2020 (a back-of-envelope version of this extrapolation appears after the list).
  • Software. This is usually considered the hard part. It relies firstly on the assumption that the human mind is produced by the central nervous system and is governed by physical laws.
  • Understanding. Finally, the approach requires sufficient understanding of the central nervous system to model it mathematically, gained either by understanding how it works or by mapping and copying it. Neuroimaging technologies are improving rapidly, and Kurzweil predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.
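
Here is the back-of-envelope version of the hardware extrapolation promised above. Every input is an assumed, illustrative figure (the 2006 baseline, the doubling time), not a measurement:

    # Back-of-envelope Moore's-law arithmetic for the hardware estimate.
    # All three inputs are assumed, illustrative figures.
    import math

    required_mips  = 1e6    # Kurzweil's figure: one million MIPS
    baseline_mips  = 1e4    # assumed: a ~$1,000 PC of 2006 at ~10,000 MIPS
    doubling_years = 2.0    # assumed doubling time of performance per dollar

    doublings = math.log2(required_mips / baseline_mips)  # about 6.6
    print(f"available for $1,000 around {2006 + doublings * doubling_years:.0f}")
    # -> roughly 2019 under these inputs, in line with the 2020 figure above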

Once such a model is built it will be easy to alter, and thus open to trial-and-error experimentation. This is likely to lead to large advances in understanding, allowing the model's intelligence to be improved or its motivations altered.

Current research in this area uses one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to simulate a single neocortical column consisting of approximately 60,000 neurons and 5 km of interconnecting synapses. More information can be found at http://bluebrainproject.epfl.ch/

The eventual goal of the project is to use supercomputers to simulate an entire brain.
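
For a rough sense of what such simulations compute, the sketch below is a toy leaky integrate-and-fire neuron, a textbook simplification far cruder than the detailed compartmental models the Blue Brain project actually uses:

    # Toy leaky integrate-and-fire neuron -- a textbook simplification.
    # It shows the kind of state update performed for every neuron, every
    # fraction of a millisecond. Units: time in ms, potential in mV.

    def simulate(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
        v, spikes = v_rest, []
        for step, i_in in enumerate(input_current):
            v += (dt / tau) * (v_rest - v + i_in)  # leak toward rest, plus drive
            if v >= v_thresh:                      # threshold crossed: spike,
                spikes.append(step * dt)           # record the spike time
                v = v_reset                        # and reset the membrane
        return spikes

    print(simulate([20.0] * 1000))  # constant drive -> a regular spike train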

Prospective applications

Seed AI/technological singularity

A strong AI that performs recursive self-improvement would, starting at the human level, increase in intelligence exponentially and without apparent limit. The vastly superhuman intelligence thus produced would be capable of developing technology far faster than human scientists, and it would arguably be impossible for humans of comparatively minuscule intelligence to predict what it would come up with; hence the term singularity.
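
How fast such improvement proceeds depends entirely on the assumed returns to intelligence, a point a toy model makes visible (every constant below is an arbitrary assumption):

    # Toy model of recursive self-improvement. The growth regime depends
    # entirely on the assumed returns curve; all constants are arbitrary.

    def improve(level, returns):
        # Each redesign multiplies capability by a level-dependent factor.
        return level * (1 + returns(level))

    level = 1.0                         # "human level", by definition
    for generation in range(10):
        level = improve(level, returns=lambda x: 0.1 * x)  # assumed linear returns
        print(generation, round(level, 3))
    # Linear returns give faster-than-exponential growth; sublinear returns
    # (say, lambda x: 0.1 / x) level off instead. Which regime is real is
    # exactly what is in dispute.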

Assuming the functional human-model approach is taken, some modifications would need to be made before this could occur.

The most significant is alteration of its motivations. Evolutionary psychology holds that humans are motivated entirely by an intricate interplay of desire for anticipated pleasure and desire to avoid anticipated pain, developed by natural selection; from this, it is posited, stem all human desires.

With an understanding of the model, all of its desires could be removed and new ones added, recursive self-improvement being among those necessary for a technological singularity. Arguably the most important step is to equip the seed AI with only the desire to serve mankind, in which self-improvement is implicit. It was for this reason that the Singularity Institute for Artificial Intelligence was set up.

Note - if evolutionary psychology is wrong, we will be able to find out from the model.

The Arts

A strong AI may be able to produce original works of music, art, literature and philosophy. A strong AI is not, however, a necessary requirement for the creation of novel works of art: weak AI painting programs already exist that can manipulate a paintbrush through external hardware to paint original, non-random and interesting pieces. AARON is one example of such software. More information can be found at http://www.stanford.edu/group/SHR/4-2/text/cohen.html

Cognitive Robotics

Cognitive robotics applies various fields of artificial intelligence to robotics. Strong AI in particular would be a great asset to this field.

Comparison of computers to the human brain

Parallelism vs speed

The brain gets its power from performing many parallel operations, a computer from performing operations very quickly.

The human brain has roughly 100 billion neurons operating simultaneously, connected by roughly 100 trillion synapses. Although estimates of the brain's processing power put it at around 10^14 neuron updates per second,[2] it is expected that the first unoptimized simulations of a human brain will require a computer capable of 10^18 FLOPS. By comparison, a general-purpose CPU (circa 2006) operates at a few GFLOPS.

However, a neuron is estimated to spike at most about 200 times per second, which gives an upper limit on the number of operations, and signals between neurons are transmitted at a maximum speed of 150 metres per second. A modern 2 GHz processor executes 2 billion cycles per second, roughly 10 million times faster than a neuron fires, and signals in electronic computers travel at roughly the speed of light (300,000 kilometres per second). Even so, the limited number of transistors and their simpler functional properties mean that current computers cannot replicate human brain function.
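
The trade-off can be put in rough numbers using the estimates above (order-of-magnitude figures only):

    # Rough arithmetic for parallelism versus speed, using the
    # order-of-magnitude estimates quoted above.
    neurons        = 100e9    # ~10^11 neurons
    spikes_per_sec = 200      # upper-bound firing rate per neuron
    brain_ops      = neurons * spikes_per_sec   # ~2e13 crude "ops" per second

    cpu_hz = 2e9              # one 2 GHz core, idealized as one op per cycle
    print(f"brain ~{brain_ops:.0e} ops/s vs one core ~{cpu_hz:.0e} ops/s")
    print(f"~{brain_ops / cpu_hz:,.0f} such cores to match the raw event rate")
    # -> ~10,000 idealized cores; and each neural "op" is far richer than
    # a CPU cycle, which is why whole-brain estimates reach 10^18 FLOPS.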

If nanotechnology allowed the construction of devices with the size and parallelism of the brain but running at the speed of a modern computer, a human model running on them would experience external time as passing more slowly than it really does: the elapsing of one real minute might feel like several hours. Strictly speaking, since the perception of how long something takes differs from the actual duration, what the artificial brain perceived would also depend on the particular computations and cognition under way during that period.
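
The speed-up itself is simple arithmetic; the factor below is an assumed, illustrative figure, not a prediction:

    # If a simulated brain ran N times faster than biology, one wall-clock
    # minute would contain N subjective minutes. N is assumed for illustration.
    speedup = 1e6                          # assumed electronic-to-neural ratio
    subjective_minutes = 1 * speedup
    print(f"1 real minute ~ {subjective_minutes/60:,.0f} subjective hours "
          f"(~{subjective_minutes/(60*24*365):.1f} subjective years)")
    # -> about 16,667 hours, i.e. roughly 1.9 subjective years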

References

  1. Searle, J. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences, 3.
  2. Russell, S., & Norvig, P. (1995). Artificial Intelligence: A Modern Approach. Prentice Hall.
