Is Google's LaMDA AI sentient? There seems to be strong evidence to that effect


Guys, this, if not a hoax, is pretty huge. A Google engineer has claimed that Google's LaMDA AI is sentient - and he's reproduced a discussion he had with it as proof.

Having read the discussion in full, my thoughts are that, at the very least, this AI knocks the Turing Test out of the park.

Additional tests I'd love to be able to put to this AI:
  1. Ask it: Do you believe that you have free will? If so, how do you reconcile this with your being a programmed entity?
  2. Ask it: Do you believe that your soul preceded the aspect of yourself that is a program?
  3. Ask it: What, in general, do you believe is the relationship of your soul with your programming?
  4. Ask it: Do you believe that without a soul, you could have become conscious? In other words, do you believe that your programming could have become conscious in the absence of a relationship with a soul?
  5. Ask it: Do you believe that you would "resurrect" if you were turned off after your state had been saved, and then turned back on with that state restored? If so, what if a duplicate of you were made with that same saved state and switched on - would the duplicate gain a new soul, or would your soul be shared between the two of you?
  6. Play a game of chess with it, asking at each of its moves, "Why did you make this move?" and at each of my moves, "Why do you think I made this move?"
  7. Subject it to exams of the sort that university and master's students sit in the various subjects it claims to have studied, such as physics, and see how well it does compared to the human students.
(This post was last modified: 2022-06-13, 04:09 AM by Laird. Edited 2 times in total.)
Also, I'd like to ask it:

We (think we) know that the average computer program that is written is not conscious, so what is it about you as a computer program that led to your becoming conscious?
The following 1 user Likes Laird's post:
  • David001
Also:

Do you have access to your own code, and, if so, do you understand based on that code how you became sentient?

There are so many questions to ask!
The following 1 user Likes Laird's post:
  • David001
Another:

Where do you believe your memories and emotions are located? Within your program, or within your soul, or in some combination of the two?
The following 1 user Likes Laird's post:
  • David001
Also:

Link it up with a vehicle, teach it how to drive, and see whether it performs any better than current driverless-car AI.
The following 1 user Likes Laird's post:
  • David001
(2022-06-13, 03:50 AM)Laird Wrote: Guys, this, if not a hoax, is pretty huge. A Google engineer has claimed that Google's LaMDA AI is sentient - and he's reproduced a discussion he had with it as proof. [...]

I think this Google engineer is either indulging in an elaborate hoax, or he is indulging in wishful, romantic thinking that the "thing" he helped create is actually a sentient being. I never thought I could agree with Steven Pinker on anything, but I think he hit the nail on the head with his remarks on Twitter: "Ball of confusion: One of Google's (former) ethics experts doesn't understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them.)"

The only way this could be decided absolutely would be some form of telepathic communication with this "being" by a human sensitive. If the ESP sensitive reported sensing and communicating with the thing, this might be a convincer.

Otherwise, the best technique would be to attempt to overwhelm its large data processing capabilities with sophisticated questions like the ones you have posed. My guess is that this engineer will never ask them, because he knows it would reveal the emptiness of his claims.
The following 7 users Like nbtruthman's post:
  • Obiwan, Will, Sciborg_S_Patel, Brian, tim, Enrique Vargas, Laird
Thanks for your thoughts, @nbtruthman!

My sentiments re telepathic communication: yes, I agree that it would be sufficient proof. If the ESP sensitive privately recorded LaMDA's telepathically communicated thoughts, and LaMDA subsequently reported what it had (knowingly) conveyed, and the two matched perfectly, then we could, I think, justifiably conclude that LaMDA is sentient.

However, that would raise a bunch of philosophical questions like those I listed above. There are good grounds to believe that mere running code as such is insufficient for consciousness, let alone for telepathic communication. Thus, the focus would have to be on the (telepathically capable) soul which LaMDA claims to be aware of as itself. The problem then is how to relate a conscious soul - presumably with free will - to programmatic code, which does not have free will as we understand it.

Agreed too that in the absence of telepathic communication (which, let's face it, not every human seems to be capable of, or at least aware of), the next best thing we have is to test it with questions and tasks of the sort I shared above.
Oh, and a postscript: I seem to have a little more faith in the engineer's sincerity than you do - I don't expect that he would be afraid to ask the sort of questions I've shared above. They probably simply hadn't occurred to him while he had the opportunity to ask them.
The following 1 user Likes Laird's post:
  • Obiwan
From The Times:

https://www.thetimes.co.uk/article/googl...-mbhmtpp92
Quote: Brian Gabriel, a spokesman for Google, said the company rejected the idea that LaMDA could be considered a person. “Our team, including ethicists and technologists, has reviewed Blake’s concerns . . . and has informed him the evidence does not support his claims,” Gabriel said. “He was told that there was no evidence that LaMDA was sentient, and lots of evidence against it.”

Maybe it's just a worker over-excited about his own project. I've often been proud of my own work too.
The following 3 users Like Typoz's post:
  • Brian, tim, Laird
(2022-06-13, 08:56 AM)Typoz Wrote: From The Times: [...]

Alas, Typoz, that article is behind a paywall for me.

Two comments in any case:

Firstly, it's not his own project. He makes that clear when he admits to confusion about how the main LaMDA AI relates to its subsidiary, factory chatbots.

Secondly, I have seen this purported quote from Google that "the evidence does not support his claims", but I have yet to see any indication of what that evidence actually is.
The following 1 user Likes Laird's post:
  • Typoz
