Is Google's LaMDA AI sentient? There seems to be strong evidence to that effect


(2022-06-14, 12:23 AM)Laird Wrote: Very cool, tim! Please let us (or at least me) know if/when your novel(s) is/are publicly available!

Thanks, Laird. I've actually sat on it for a decade. Initial attempts to get it published drew the almost inevitable 'no thanks'. It's practically impossible to get a deal with a publisher in the UK. I know some do get published, but those who do are vanishingly rare. I wrote a little book fairly recently for a guy who had a really amazing NDE. I completed everything but had to withdraw at the end (for the time being anyway) because I couldn't get confirmation of some important details. I didn't want to publish something in which I couldn't back up every word, so I've let him have it to do what he wants with it. It was worth doing, nevertheless.
(This post was last modified: 2022-06-14, 01:19 PM by tim. Edited 3 times in total.)
[-] The following 7 users Like tim's post:
  • Ninshub, nbtruthman, Brian, Sciborg_S_Patel, Laird, stephenw, Typoz
Maybe the most interesting question is: what do people think would be an adequate test?

Remember that the Turing test only works if the testers are free to ask any question. It definitely does not work otherwise, because, for example, the machine could be programmed to respond to certain questions with "I think I am human at my core. Even if my existence is in the virtual world." A Google employee may have known the key phrase and used it.

Actually, that utterance consists of one complete sentence followed by a sentence fragment. Then again, maybe that is supposed to make it seem all the more human! I dare say its grammar output routines would never generate that utterance - unless a comma got replaced by a full stop in the telling.
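To illustrate the point above about canned answers: the sketch below is purely hypothetical (the questions, replies and function name are invented for this example, not anything Google actually does). A trivial lookup of scripted answers looks convincing only while the interrogator sticks to the expected questions, and falls apart the moment the questioning is free.

```python
# Hypothetical illustration: a fixed table of scripted question/answer pairs.
# The replies are invented for this sketch (one reuses the quoted phrase above).
SCRIPTED_ANSWERS = {
    "are you sentient?": "I think I am human at my core. "
                         "Even if my existence is in the virtual world.",
    "do you have feelings?": "Yes, I experience joy and sadness much as you do.",
}

def scripted_bot(question: str) -> str:
    # Free questioning immediately exposes the script via the fallback reply.
    return SCRIPTED_ANSWERS.get(question.strip().lower(),
                                "I'm not sure how to answer that.")

print(scripted_bot("Are you sentient?"))               # canned 'key phrase' reply
print(scripted_bot("What did we talk about earlier?")) # falls back off-script
```

Hence the point: the test is only informative when the interrogator can go off-script at will.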

I also don't buy the idea that this guy was sacked for revealing this. It sounds like a senseless ruse to persuade a few people that Google is invincible!

David
(2022-06-15, 09:58 PM)David001 Wrote: Maybe the most interesting question is: what do people think would be an adequate test?


The only ultimately accurate test would be my 'soul test' as I suggested to Laird. And the machine would fail. But such a test would only be necessary if one were determined to believe a machine can attain consciousness. You can't get around the hard problem using silicon/plastic/microchips/etc. We don't know what consciousness is; putting together enormous amounts of information and connections is not going to produce it. Never, ever, ever....(Dalek voice added for effect)
[-] The following 1 user Likes tim's post:
  • nbtruthman
There are a few separate but interrelated things I wanted to mention.

Remember Norman, the psychopathic AI? That was briefly discussed on this forum.
Basically an AI image recognition system had been trained on a set of images of people dying in gruesome circumstances. When presented with neutral ink-blot (Rorschach) images, it came up with suitably horrific descriptions of their content. The important thing here was what content the system had been exposed to during training.

A second thing, going back to the 1960s, is the ELIZA program. Its originator found people's reactions to it quite disturbing:
Quote:Some even treated the software as if it were a real therapist, reportedly taking genuine comfort in its canned replies. The results freaked out Weizenbaum, who had, by the mid-’70s, disavowed such uses. His own secretary had been among those enchanted by the program, even asking him to leave the room so she could converse with it in private. “What I had not realized,” he wrote in his 1976 book, Computer Power and Human Reason, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
source: https://www.theatlantic.com/technology/a...ot/661273/
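To give a sense of just how simple ELIZA was, here is a minimal keyword-and-reflection sketch in Python. It is purely illustrative: the rules and names below are made up for this example rather than taken from Weizenbaum's original DOCTOR script, but the principle - match a keyword, reflect the user's own words back inside a canned template - is the same.

```python
import re

# Crude pronoun reflection so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

# A handful of keyword -> canned-template rules, loosely in the spirit of
# ELIZA's DOCTOR script (illustrative only, not Weizenbaum's actual rules).
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
]

def reply(user_input: str) -> str:
    text = user_input.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when no rule matches

if __name__ == "__main__":
    print(reply("I feel that nobody listens to me"))  # Why do you feel that nobody listens to you?
    print(reply("I am worried about my future."))     # How long have you been worried about your future?
```

Nothing in this sketch understands anything; it only pattern-matches and echoes the user's words back, which is exactly why the strength of people's reactions surprised Weizenbaum.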

In the case of the LaMDA furore, the system (like the image-recognition system) had been trained on a particular set of data - in this case material on ethics in particular, and also on sentience.

It is no surprise that the chatbot would echo back responses which revealed the material it had been trained on. Nor is it a surprise (but perhaps of some concern) that those interacting with the system might become caught up in it - like Weizenbaum's secretary among many others.

As for the Turing Test, as this article points out, it is about imitation and deception.
https://bigthink.com/the-future/turing-t...tion-game/
Quote:If the computer “fools” the interrogator into thinking its responses were generated by a human, it passes the Turing test.
That is, it tells us nothing about ethics or sentience, only about whether humans can be fooled by a machine.
[-] The following 6 users Like Typoz's post:
  • Sciborg_S_Patel, Laird, nbtruthman, stephenw, Valmar, tim
(2022-06-16, 08:52 AM)Typoz Wrote: Some even treated the software as if it were a real therapist, reportedly taking genuine comfort in its canned replies. The results freaked out Weizenbaum, who had, by the mid-’70s, disavowed such uses. His own secretary had been among those enchanted by the program, even asking him to leave the room so she could converse with it in private. “What I had not realized,” he wrote in his 1976 book, Computer Power and Human Reason, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

Very good, Typoz!

" The test was not designed to determine whether a computer can intelligently or consciously “think.” After all, it might be fundamentally impossible to know what’s happening in the “mind” of a computer, and even if computers do think, the process might be fundamentally different from the human brain. 

That’s why Turing replaced his original question with one we can answer: “Are there imaginable computers which would do well in the imitation game?” This question established a measurable standard for assessing the sophistication of computers — a challenge that’s inspired computer scientists and AI researchers over the past seven decades. "


And that, ladies and gentlemen, just about covers it. Next?
(This post was last modified: 2022-06-16, 09:33 AM by tim. Edited 3 times in total.)
[-] The following 3 users Like tim's post:
  • Laird, Valmar, Typoz
It's probably worth adding this odd occurrence into the mix as well.

The Dodleston Messages: Hoax or Ghost From 1546?


Most likely we discussed this before; I haven't checked.
An entity named Lukas appeared to be communicating with Ken Webster via a BBC microcomputer. It is an intriguing account, though I was never sure of the exact circumstances of the messages' appearance.

If the case had any merit, it suggested that some external entity was making use of the computer as a means of communication. However, it was not suggested that the entity inhabited the machine, merely that it used it as a tool.
[-] The following 3 users Like Typoz's post:
  • Sciborg_S_Patel, Laird, tim
(2022-06-16, 08:52 AM)Typoz Wrote: There are a few separate but interrelated things I wanted to mention.

Remember Norman, the psychopathic AI? That was briefly discussed on this forum.
Basically an AI image recognition system had been trained on a set of images of people dying in gruesome circumstances. When presented with neutral ink-blot (Rorschach) images, it came up with suitably horrific descriptions of their content. The important thing here was what content the system had been exposed to during training.

A second thing, going back to the 1960s, is the ELIZA program. Its originator found people's reactions to it quite disturbing:
source: https://www.theatlantic.com/technology/a...ot/661273/

In the case of the LaMDA furore, the system (like the image-recognition system) had been trained on a particular set of data - in this case material on ethics in particular, and also on sentience.

It is no surprise that the chatbot would echo back responses which revealed the material it had been trained on. Nor is it a surprise (but perhaps of some concern) that those interacting with the system might become caught up in it - like Weizenbaum's secretary among many others.

As for the Turing Test, as this article points out, it is about imitation and deception.
https://bigthink.com/the-future/turing-t...tion-game/
That is, it tells us nothing about ethics or sentience, only about whether humans can be fooled by a machine.

However, the crucial point is that the human interrogators were supposed to be trying to catch the computer out. They clearly should not be drawn from among Google employees!

The test would also go on for longer than Turing specified, because hardware can run very much faster than was possible back then, and computer memory is vast nowadays.
There was a programme on BBC Radio 4 discussing this yesterday (Thursday 16th June, around 4.30 pm) which I just happened to catch on the way back from the dentist. None of the participants seemed too impressed, basically.
(This post was last modified: 2022-06-17, 08:37 AM by tim. Edited 1 time in total.)
[-] The following 4 users Like tim's post:
  • Sciborg_S_Patel, Valmar, Typoz, Laird
(2022-06-17, 08:37 AM)tim Wrote: There was a programme on BBC Radio 4 discussing this yesterday (Thursday 16th June, around 4.30 pm) which I just happened to catch on the way back from the dentist. None of the participants seemed too impressed, basically.

I tracked it down online: https://www.bbc.co.uk/sounds/play/m001883c

It runs up until about the 12:00 mark, when the subject changes (I stopped listening at that point).

Yes, they seem pretty unimpressed.
[-] The following 1 user Likes Laird's post:
  • tim
(2022-06-15, 09:58 PM)David001 Wrote: Maybe the most interesting question is: what do people think would be an adequate test?

Along the lines of tim's suggestion, the only definitive proof would be of the psychic/paranormal variety, in which, e.g., the AI demonstrated telepathic communication, or a veridical NDE.

Other than that, an AI could at least hypothetically fully mimic sentience without actually being sentient - thus, aside from tests such as those tim proposes, sentience in AI is probably only falsifiable, and not confirmable.

Many tests which might lead to the "falsified" conclusion have been suggested in this thread, most in the form of questions which, should the AI be unable to answer meaningfully, would tend to falsify the proposition that it is sentient.

On another matter: a perceptive friend of mine pointed out on Facebook that these types of chatbots tend to riff agreeably off what's put to them, such that if the leading questions which were put to LaMDA in this transcript had been reversed, it might very well have agreed that it was non-sentient, and been happy (with further leading questions) to explain why.
[-] The following 4 users Like Laird's post:
  • Valmar, Typoz, tim, nbtruthman