What time does the nearest supermarket close? Do you need to take an umbrella today? What action movies are playing at the cinema?
You may be used to asking your voice assistant these kinds of questions instead of looking up the answers yourself.
However, although conversing with a machine may seem like a relatively recent development, you may be surprised to learn that the first conversational bot (or chatbot), that is, the first program designed to respond automatically and coherently to whatever we asked it, was invented more than 50 years ago.
Like Siri, Alexa, Echo, Cortana and other digital assistants, this first artificial intelligence conversational program had a woman’s name.
Her name was Eliza, in honor of the character Eliza Doolittle from the play Pygmalion by Irish playwright George Bernard Shaw, a street flower seller who, with the help of a professor, transforms in six months into a lady of high society.
Eliza, the psychologist
Eliza was born in the United States in 1966 .
It was the brainchild of Joseph Weizenbaum, a professor of computer science at the Massachusetts Institute of Technology (MIT), who wanted to parody psychotherapists who followed the guidelines of psychologist Carl Rogers, creator of so-called "client-centered therapy".
Weizenbaum’s idea was for Eliza to converse (in writing) with her interlocutor in a way that gave the impression she was listening and empathizing with his problems and anxieties.
With this, he wanted to show how superficial communication between human and machine could be.
To achieve a non-robotic conversation, as close as possible to a dialogue between two human beings, Eliza used several resources.
One of her secrets was to recognize key words and then ask questions related to that topic.
For example, when the interlocutor mentioned the word father or mother, Eliza recognized that he was talking about his family and asked questions related to that topic.
She also stored phrases on a variety of topics, which she used on recognizing the corresponding key words, as well as empathic and continuity phrases.
The latter, such as "tell me more about that" or "go on", were useful for conveying interest and keeping the subject talking, as if he were having a real conversation.
If Eliza found no associated words in her database, she fell back on phrases like "Why do you say that?" or "Are you sure?" and similar variations.
And, as many psychotherapists do, she would take a phrase the user had said and repeat it back, transformed into a question.
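The three resources described above — keyword recognition, fallback phrases, and turning the user's own words into a question — can be sketched in a few lines of Python. This is an illustrative reconstruction, not Weizenbaum's original script: the keyword table, reflection rules, and fallback phrases below are invented for the example.

```python
import random
import re

# Hypothetical keyword table: each key word maps to a canned
# follow-up question on that topic.
KEYWORD_RESPONSES = {
    "mother": "Tell me more about your family.",
    "father": "Tell me more about your family.",
    "work": "Why do you mention your work?",
    "unhappy": "Do you often feel unhappy?",
}

# Continuity phrases used when nothing else matches.
FALLBACKS = ["Why do you say that?", "Are you sure?", "Please go on."]

# Pronoun swaps so a repeated phrase reads naturally as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "i'm": "you're"}

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones."""
    words = phrase.lower().strip(" .!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    lowered = user_input.lower()
    # 1. Keyword recognition: reply with a question on that topic.
    for keyword, response in KEYWORD_RESPONSES.items():
        if keyword in lowered:
            return response
    # 2. Reflection: repeat the user's phrase back as a question.
    match = re.match(r"i am (.*)", lowered)
    if match:
        return f"Why are you {reflect(match.group(1))}?"
    # 3. Fallback phrases keep the conversation going.
    return random.choice(FALLBACKS)
```

With this sketch, the input "I am unhappy." triggers the `unhappy` keyword and produces "Do you often feel unhappy?", much like the dialogue below; an unrecognized input falls through to one of the continuity phrases.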
This is an example of a dialogue with Eliza:
Eliza: I’m the psychotherapist. Please, describe your problems to me. Every time you finish speaking, write RET twice.
Interlocutor: I am unhappy.
Eliza: Do you often feel unhappy?
Eliza: Can you develop it more?
Interlocutor: Too much work.
Eliza: Why do you say a lot of work?
Interlocutor: I’m working too much.
Eliza: You said before that you felt unhappy.
Eliza: I would like you to continue talking.
The program, however, had its limitations.
It was unable to memorize anything and could not learn from its conversations.
Even so, and although Weizenbaum explained how the program worked out its answers, he was surprised to find that users assumed that Eliza understood them.
Some even told the program about their problems and intimacies.
Ten years later, the computer scientist turned his experiences with Eliza into a book, Computer Power and Human Reason, in which he expressed his ambivalence about this technology and made it clear that we should never give computers the power to make important decisions, since they lack fundamental human qualities such as compassion and wisdom.
These first attempts were later followed by other experiments aimed at humanizing computers.