
A chatterbot (or chatbot) is a type of conversational agent, a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods. Though many appear to interpret the human input intelligently before responding, most chatterbots simply scan the input for keywords and pull from a local database the reply whose keywords or wording pattern best match it. Chatterbots may also be referred to as talk bots, chat bots, or chatterboxes.

Method of operation

Carrying on a meaningful dialogue requires a genuine understanding of the conversation, but most chatterbots do not attempt this. Instead, they "converse" by recognizing cue words or phrases from the human user, which lets them use pre-prepared or pre-calculated responses that move the conversation forward in an apparently meaningful way without requiring them to know what they are talking about.

For example, if a human types, "I am feeling very worried lately," the chatterbot may be programmed to recognize the phrase "I am" and respond by replacing it with "Why are you" plus a question mark at the end, giving the answer, "Why are you feeling very worried lately?" A similar approach using keywords would be for the program to answer any comment including (Name of celebrity) with "I think they're great, don't you?" Humans, especially those unfamiliar with chatterbots, sometimes find the resulting conversations engaging. Critics of chatterbots call this engagement the ELIZA effect.
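
The rewriting trick in this example is simple enough to sketch in code. The following Python fragment is a minimal illustration of the idea with two invented cue-word rules; it is not taken from any actual chatterbot.

```python
import re

# Illustrative cue-word rules in the spirit of the example above; they are
# invented for this sketch, not taken from any specific chatterbot.
RULES = [
    # (compiled pattern, response template); \1 echoes the matched text back
    (re.compile(r"i am (.*)", re.IGNORECASE), r"Why are you \1?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), r"Why do you feel \1?"),
]


def respond(user_input: str) -> str:
    """Scan the input for a known cue phrase and build a canned reply."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return match.expand(template)
    # Generic fallback when no cue word matches.
    return "Tell me more."


print(respond("I am feeling very worried lately"))
# -> Why are you feeling very worried lately?
```

Everything such a program "knows" is contained in its rules and fallback lines; it has no model of what worry, or anything else, actually means.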

Some programs classified as chatterbots use other principles. One example is Jabberwacky, which attempts to model the way humans learn new facts and language. ELLA attempts to use natural language processing to make more useful responses from a human's input. Some programs that use natural language conversation, such as SHRDLU, are not generally classified as chatterbots because they link their speech ability to knowledge of a simulated world. This type of link requires a more complex artificial intelligence (e.g., a "vision" system) than standard chatterbots have.

Early chatterbots

The classic early chatterbots are ELIZA and PARRY. More recent programs are Racter, Verbots, A.L.I.C.E., and ELLA.

The growth of chatterbots as a research field has created an expansion in their purposes. While ELIZA and PARRY were used exclusively to simulate typed conversation, Racter was used to "write" a story called The Policeman's Beard is Half Constructed. ELLA includes a collection of games and functional features to further extend the potential of chatterbots.

The term "ChatterBot" was coined by Michael Mauldin (creator of the first Verbot, Julia) in 1994 to describe these conversational programs, in a paper written for the Twelfth National Conference on Artificial Intelligence.

Malicious chatterbots

Malicious chatterbots are frequently used to fill chat rooms with spam and advertising, or to entice people into revealing personal information, such as bank account numbers. They are commonly found on Yahoo! Messenger, .NET Messenger Service, AOL Instant Messenger and other instant messaging protocols. There has been a published report of a chatterbot used in a fake personal ad on a dating service's website.[1]

Chatterbots in modern AI

Most modern AI research focuses on practical engineering tasks. This is known as weak AI and is distinguished from strong AI, which would require sapience and reasoning abilities.

One pertinent field of AI research is natural language. Weak AI fields usually employ specialised software or programming languages created for them. For example, one of the 'most human' natural language chatterbots, A.L.I.C.E., uses a programming language called AIML that is specific to the program and its various clones, named Alicebots. Nevertheless, A.L.I.C.E. is still based purely on pattern matching, without any reasoning. This is the same technique that ELIZA, the first chatterbot, used back in 1966.
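
As an illustration only, the pattern-matching idea behind such category files can be sketched roughly as follows. This is plain Python rather than actual AIML, and the three categories are invented; real Alicebots load thousands of categories from AIML files.

```python
import re

# Invented AIML-style categories: a pattern with "*" wildcards paired with a
# canned response template.
CATEGORIES = [
    ("HELLO *", "Hi there! What would you like to talk about?"),
    ("WHAT IS YOUR NAME", "My name is not important."),
    ("* WEATHER *", "I do not know much about the weather."),
]


def to_regex(pattern: str) -> re.Pattern:
    """Turn an AIML-style wildcard pattern into a regular expression."""
    return re.compile("^" + re.escape(pattern).replace(r"\*", ".*") + "$")


def reply(user_input: str) -> str:
    normalized = user_input.upper().strip(" ?!.")
    for pattern, template in CATEGORIES:
        if to_regex(pattern).match(normalized):
            return template
    return "I have no answer for that."


print(reply("Hello there"))  # -> Hi there! What would you like to talk about?
```

However elaborate the category file becomes, the response is still selected by matching, not by reasoning about the input.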

Another notable program, known as Jabberwacky, may be a little closer to strong AI, as it is claimed to learn new responses based on user interactions, rather than being driven from a static database like many other existing chatterbots. Although such programs show initial promise, many of the existing results in trying to tackle the problem of natural language still appear fairly poor, and it seems reasonable to state that there is currently no general purpose conversational artificial intelligence. This has led some software developers to focus more on the practical aspect of chatterbot technology: information retrieval.
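
To give a rough sense of what "learning new responses based on user interactions" could look like, the toy sketch below stores what users say in reply to the bot's own utterances and reuses those replies later. It is only an illustration of that general description, not Jabberwacky's actual algorithm.

```python
import random
from collections import defaultdict

# Toy sketch of learning from interaction (not Jabberwacky's actual
# algorithm): the bot remembers what users said in reply to each of its own
# utterances and later reuses those replies when the same phrase comes up.
class LearningBot:
    def __init__(self) -> None:
        self.memory = defaultdict(list)  # bot utterance -> user replies seen
        self.last_utterance = ""

    def respond(self, user_input: str) -> str:
        # Learn: record the user's reply to whatever the bot said last.
        if self.last_utterance:
            self.memory[self.last_utterance].append(user_input)
        # Reply: reuse something a previous user said after this same phrase.
        learned = self.memory.get(user_input)
        reply = random.choice(learned) if learned else "Go on."
        self.last_utterance = reply
        return reply


bot = LearningBot()
print(bot.respond("Hello"))         # nothing learned yet -> "Go on."
print(bot.respond("How are you?"))  # now stored as a possible reply to "Go on."
```

Such a bot's responses grow out of its conversations rather than a fixed rule set, but it still has no understanding of what it is saying.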

A common rebuttal within the AI community against criticism of such approaches asks, "How do we know that humans don't also just follow some cleverly devised rules?" (in the way that chatterbots do). Two famous examples of this line of argument against the rationale for the Turing test are John Searle's Chinese room argument and Ned Block's Blockhead argument.


References

  1. Epstein, Robert. "From Russia With Love: How I got fooled (and somewhat humiliated) by a computer." Scientific American: Mind, October-November 2007, pages 16-17. Also available online; URL accessed on October 23, 2007. Psychologist and Scientific American: Mind contributing editor Robert Epstein reports how he was initially fooled by a chatterbot posing as an attractive girl in a personal ad he answered on a dating website. In the ad, the girl portrayed herself as being in Southern California, then soon revealed, in poor English, that she was actually in Russia. He became suspicious after a couple of months of email exchanges, sent her an email test of gibberish, and she still replied in general terms. The dating website is not named.

