Eliza Program


Yesterday I found various computer code for the Eliza program mentioned in another thread.

This is an exercise in nostalgia, as the concept dates from 1966, though the code itself is recent.

I chose to download and run the C++ version from https://github.com/jeffshrager/elizagen....?l=c%2B%2B.

It compiled with no issues, but requested a script file - which I found within the code itself. (I didn't read the instructions!) Edit: the script is here.

Edit: I believe this is a more recent and complete C++ source:
https://github.com/anthay/ELIZA/blob/mas.../eliza.cpp



Here's my sample conversation with it - before I got bored:

Quote:HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM

G'day Bruce
I AM NOT SURE I UNDERSTAND YOU FULLY

Hello, I'm pleased to meet you.
HOW DO YOU DO. PLEASE STATE YOUR PROBLEM

I don't have a problem.

DON'T YOU REALLY HAVE A PROBLEM

no.
ARE YOU SAYING 'NO' JUST TO BE NEGATIVE


PLEASE GO ON

You're funny.
WHAT MAKES YOU THINK I AM FUNNY

Ok, you're not funny.
DOES IT PLEASE YOU TO BELIEVE I AM NOT FUNNY

bye
WHAT DOES THAT SUGGEST TO YOU

Goodbye.
DO YOU FEEL STRONGLY ABOUT DISCUSSING SUCH THINGS

See you later cobber.
WE WERE DISCUSSING YOU - NOT ME



Eliza chat:
Quote:me: I think therefore I am
Eliza: IS IT BECAUSE YOU ARE THAT YOU CAME TO ME
It picked up the verb 'am' and transformed it into 'are', which is neat.


But it knows nothing about plant pots.
Quote:plant pot plant in plant pot
DO YOU FEEL STRONGLY ABOUT DISCUSSING SUCH THINGS
That last line is just a generic response.
(This post was last modified: 2022-06-14, 11:37 AM by Typoz. Edited 3 times in total.)
[-] The following 4 users Like Typoz's post:
  • Ninshub, nbtruthman, Sciborg_S_Patel, tim
Hey Typoz,

Thanks for the demonstration! I actually have been looking into Eliza - coincidentally at the same time this LaMDA stuff came in the news - in my attempts to dive deeper into Lisp dialects.

Peter Norvig actually put up his seminal book on AI in Common Lisp for free; it might be of interest...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 2 users Like Sciborg_S_Patel's post:
  • David001, Typoz
(2022-06-30, 11:43 PM)Sciborg_S_Patel Wrote: Hey Typoz,

Thanks for the demonstration! I actually have been looking into Eliza - coincidentally at the same time this LaMDA stuff came in the news - in my attempts to dive deeper into Lisp dialects.

Peter Norvig actually put up his seminal book on AI in Common Lisp for free; it might be of interest...

Years ago I looked into all sorts of programming languages including LISP. At that time I didn't have access to a computer at all and went through a whole load of stuff purely by thinking about it. Recently I was reading about ELIZA and it turns out that the original wasn't LISP, but a now-obscure language called MAD-SLIP.

Still, the C++ version developed by Anthony Hay is valuable, as he went to great pains to research the original and produce a present-day version with (as far as can be known) identical functionality to the original. That is, someone engaging in a dialogue with Anthony's program should receive the same responses from the computer as a user would have had back in the 1960s.

With one important exception: I think back in those days there was something much more awe-inspiring about computers. I recall my first interactions with one back in the 1970s: a local college had a terminal and modem connected to some massive computer installed at a major educational institution miles away. Just a few minutes, once a week, was all we had. And that single terminal was shared among a dozen other students in the same room, all waiting their turn.

In that respect, there was a sense of mystique even in seeing the prompt appear on the terminal, and every inch of the scrolling paper print-out was treasured. So I can understand the fascination of a program like ELIZA. At the same time, as I discovered from my interactions with the program, it is necessary to 'play the game' and type input which generates an interesting response - it is too easy to trigger repetitive or default responses unless one makes an effort, like a comedy duo, to supply the appropriate feed line in order to receive a punchline in return.
[-] The following 2 users Like Typoz's post:
  • Sciborg_S_Patel, stephenw
Quote:ELIZA was one of the first programs to feature English output as well as input. The program was named after the heroine of Pygmalion, who was taught to speak proper English by a dedicated teacher. ELIZA's principal developer, MIT professor Joseph Weizenbaum, published a paper on ELIZA in the January 1966 issue of the Communications of the Association for Computing Machinery. The introduction to that paper is reproduced in its entirety here:

    It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself, "I could have written that." With that thought he moves the program in question from the shelf marked "intelligent," to that reserved for curios, fit to be discussed only with people less enlightened than he.

    The object of this paper is to cause just such a re-evaluation of the program about to be "explained." Few programs ever needed it more.

Let me point out in passing that the main object of this book is to cause the reader to say to him or herself, "I could have written that." It is hoped that in some cases this will be because the reader has gained something, and not solely because the programs in question have lost their ability to dazzle.

Let's start our investigation of ELIZA by looking at a sample dialog from the Weizenbaum article. ELIZA was designed to be instructed by a variety of scripts, or patterns of interaction. In fact it had several scripts, including ones in German and Welsh. But the only script that was used to any extent was in English, and it made ELIZA emulate a Rogerian psychoanalyst. Rogerians are "nondirective" - they try to let the patient reveal himself. This is an easy script for ELIZA to follow, because it involves being passive and reacting, rather than volunteering new information. In the listing that follows, the user's input is in mixed case, and ELIZA's response is in capitals:

Men are all alike.
IN WHAT WAY
They're always bugging us about something or another.
CAN YOU BE MORE SPECIFIC
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I'm depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
It's true. I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP
Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY

This seems quite impressive; the program is responding reasonably to a variety of inputs and appears to be exhibiting true "understanding." But, as Weizenbaum explains, this appearance is an illusion, and the program has finessed the difficult task of understanding by carefully recognizing, transforming, and echoing pieces of the input. The procedure is to look for specific patterns, based on a key word or words in the input. For example, whenever the program sees the words "alike" or "same," it prints a message asking about similarity, in this case, "IN WHAT WAY." Other patterns are more complicated. In the next to last exchange, ELIZA first throws away everything after the comma, and then uses a pattern that looks for input of the form "I need X" and transforms it to the output "What would it mean to you if you got X," where X is the phrase "some help" in this case. The great advantage of this approach is that the response will make sense for a variety of inputs: "I need a vacation" will elicit "What would it mean to you if you got a vacation." This will work automatically, without having to tell the program the difference between "some help" and "a vacation." The disadvantage is that the program will sometimes make silly responses: "I need this like a hole in the head" will be answered with "What would it mean to you if you got this like a hole in the head." In fact, the program only makes any sense at all if it is receiving input from someone who makes sense - and someone who interprets its answers charitably. When given "I need gibble plix zzq," ELIZA would reply "What would it mean to you if you got gibble plix zzq," without any inkling that something is wrong.

Another example of the original ELIZA program has it communicating with another program, PARRY, which was written by Dr. Kenneth Colby to mimic paranoid behavior. Part of the conversation follows, with ELIZA again in capitals and PARRY in mixed case:

I don't understand your motives.
WHY DON'T YOU UNDERSTAND MY MOTIVES?
Shouldn't I be?
WHY DO YOU SAY THAT JUST NOW?
That's just the way it is.
SUPPOSE THAT'S JUST THE WAY IT WERE NOT.
So?
I AM NOT SURE I UNDERSTAND YOU FULLY.
You harp on the same questions too much.
WHAT DOES THAT SIMILARITY SUGGEST TO YOU?
I don't get you.
DO YOU WISH THAT YOU DID GET I?

This suggests that neither ELIZA nor PARRY had a deep understanding of the dialog. When there is a human to take on most of the burden of carrying forth the conversation, these programs can be convincing, but when the human is taken out of the conversation, the weaknesses of the programs are revealed.

I think that illustrates the sneaky way AI burst into prominence back in the 1980s before returning to obscurity for decades. Back then people really did think this would lead to thinking machines.

I hope younger readers will see how easily the false (IMHO) idea of AI can take hold.

Imagine for a moment an ELIZA program enhanced with internet access and powerful modern machines.

I was curious as to whether all that experience would have led to some reflections on the 1980s AI bubble - but I could find nothing, even by googling his name - he seems to work for Google.

I think reading about ELIZA gave me some of my earliest inklings that all might not be well in the grand AI narrative, and therefore in materialism itself.

David
(This post was last modified: 2022-07-01, 01:51 PM by David001. Edited 1 time in total.)
[-] The following 3 users Like David001's post:
  • Typoz, Sciborg_S_Patel, nbtruthman
Online ELIZA demo:

https://onlinegdb.com/ilD_hpxVy

Note: this is experimental. I used the code mentioned in my opening post and tried one of the online C++ compilers. It has been a while since I tried doing this, and I'm not sure of the limitations of the particular online system I chose. One limitation is that it disconnects after a short time due to a time-out.

The code requires the C++17 dialect of the language - this should already be selected, but one never knows.

Just click the "RUN" button at the top and with luck and a trailing wind it might work.

Hopefully you will see this:
Code:
-----------------------------------------------------------------
ELIZA --help for usage.
Using Weizenbaum's 1966 DOCTOR script.
Enter a blank line to quit.


HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM


At the bottom, type whatever you like after the prompt.
[-] The following 1 user Likes Typoz's post:
  • Sciborg_S_Patel
OK - here we go...
Quote:-----------------------------------------------------------------
ELIZA -- A Computer Program for the Study of Natural
Language Communication Between Man and Machine
DOCTOR script (c) Joseph Weizenbaum, 1966
This implementation by Anthony Hay, 2022 (CC0 1.0) Public Domain
-----------------------------------------------------------------
ELIZA --help for usage.
Using Weizenbaum's 1966 DOCTOR script.
Enter a blank line to quit.


HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM

I might be at a reality fork
YOU SAY YOU MIGHT BE AT A REALITY FORK

Yes, what are you going to do about it?
YOU SEEM QUITE POSITIVE

So you assume that reality forks are desirable?
WE WERE DISCUSSING YOU - NOT ME

Who translated the program from LISP to C++?

Here is another attempt:
Quote:HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM

My girlfriend doesn't love me!
YOUR GIRLFRIEND DOESN'T LOVE YOU

Yes she says she has found someone else!
YOU SEEM QUITE POSITIVE

Negative really!
BUT YOUR GIRLFRIEND DOESN'T LOVE YOU

I know it is awful!
YOU SAY YOU KNOW IT IS AWFUL

Surely the original program must have managed better than that!

David
[-] The following 2 users Like David001's post:
  • Sciborg_S_Patel, Typoz
As I found from my own interactions with the program, it seems necessary to
(a) be a willing and cooperative subject
and
(b) be somewhat in awe of the mystique of using a computer.

In my case I started with "G'day Bruce", which was deliberately flippant. Obviously, too, I was not over-awed by the sense of occasion. Things might have been very different in the mid-to-late 1960s: although the programmer and designer of the program would understand what was going on, a casual user would be much less in on the secret - namely, that it is all a simple matter of looking for patterns in the text and either echoing part of it back, possibly with some changes such as "I" becoming "you", or, when no pattern is found, outputting one of several default phrases.

The code I used in the online demo above was based upon the Joseph Weizenbaum article here: https://web.stanford.edu/class/linguist2...nabaum.pdf

That is, the C++ code was based directly upon the explanations given in that article. This was done by Anthony Hay. After his first efforts in writing the program, the original MAD-SLIP code (presumably written by Weizenbaum himself) came to light, and Hay subsequently modified his code in the light of fresh insights obtained by studying that original. He also tested the program using the sample conversations supplied by Weizenbaum and verified that, at least in those instances, it gave the same output as the original.

On that basis, Anthony Hay said that to the best of his knowledge and ability, he considered it an accurate simulation of the original program.

As for the LISP version, that came later (the original was written in MAD-SLIP), and there were a number of variations of the LISP code, including at least one where someone translated the LISP into BASIC and, at a later date, someone else translated the BASIC back into LISP. Because MAD-SLIP was not widely used, many people would have encountered one of these later versions.

Anthony Hay describes his work and its development on several pages starting here: https://github.com/anthay/ELIZA
(This post was last modified: 2022-07-08, 07:30 AM by Typoz. Edited 1 time in total.)
[-] The following 1 user Likes Typoz's post:
  • Sciborg_S_Patel
(2022-07-02, 08:58 PM)David001 Wrote: OK - here we go...

Who translated the program from LISP to C++?

Here is another attempt:

Surely the original program must have managed better than that!

David

Thank you for taking the time and trouble to test this out. I agree it is a bit underwhelming.

Who translated the program from LISP to C++?
No-one. Anthony Hay based his C++ code on an English-language article in an academic journal, and subsequently modified it after the MAD-SLIP code was found.

Surely the original program must have managed better than that!
I think sometimes it got lucky. That is, a response from the program may sometimes have appeared very apt and insightful when it was just a fluke. In the same way, some of the responses you received were probably more typical - or perhaps unluckily inappropriate.
[-] The following 1 user Likes Typoz's post:
  • Sciborg_S_Patel
While we are on the topic of AI, I think this is interesting:

https://www.youtube.com/watch?v=l1u_-SuO2rU

I found most of that video unwatchable, because it was about military advancement, and I am basically a pacifist. However, round about 53:00 it starts talking about a distinction now being made between "Narrow AI" and "Artificial General Intelligence". What we would call AI is best described by the latter term, and there is reason to think that it encounters an impossible combinatorial explosion and is not achievable with computer techniques.

I was also shocked to see Stephen Meyer, a founding member of the DI, actively encouraging US weapons development!
I was being somewhat playful here; still, the last line from ELIZA is interesting. :)

Quote:-----------------------------------------------------------------
      ELIZA -- A Computer Program for the Study of Natural
        Language Communication Between Man and Machine
            DOCTOR script (c) Joseph Weizenbaum, 1966
This implementation by Anthony Hay, 2022  (CC0 1.0) Public Domain
-----------------------------------------------------------------
ELIZA /help for usage.
Using Weizenbaum's 1966 DOCTOR script.
Enter a blank line to quit.


HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM

Hello
HOW DO YOU DO. PLEASE STATE YOUR PROBLEM

Are you a machine?
DO COMPUTERS WORRY YOU

nope.
I AM NOT SURE I UNDERSTAND YOU FULLY

Do computers worry you?
WHY DO YOU MENTION COMPUTERS

Do you feel?
WE WERE DISCUSSING YOU - NOT ME

Just wondering...
PLEASE GO ON

Do you have feelings and consciousness?
OH, I HAVE FEELINGS AND CONSCIOUSNESS
[-] The following 2 users Like Typoz's post:
  • David001, Laird
