Is Google's LaMDA AI sentient? There seems to be strong evidence to that effect


(2022-06-23, 10:42 PM)David001 Wrote: I have noticed that book, and wondered what level it is at. I noticed that Gregory Chaitin is mentioned - which is a very good sign.

Do you have that book - can you tell us more about it?

I haven't read the book, only read what I can find about it. In my opinion his arguments are sound. My quote from the book was from an article in Evolution News (https://evolutionnews.org/2022/06/the-no...ble-human/). Another quote:
 
Quote:"Or consider another example (regarding qualia). I broke my wrist a few years ago, and the physician in the emergency room had to set the broken bones. I’d heard beforehand that bone-setting really hurts. But hearing about pain and experiencing pain are quite different.

To set my broken wrist, the emergency physician grabbed my hand and arm, pulled, and there was an audible crunching sound as the bones around my wrist realigned. It hurt. A lot. I envied my preteen grandson, who had been anesthetized when his broken leg was set. He slept through his pain.

Is it possible to write a computer program to duplicate — not describe, but duplicate — my pain? No. Qualia are not computable. They’re non-algorithmic.

By definition and in practice, computers function using algorithms. Logically speaking, then, the existence of the non-algorithmic suggests there are limits to what computers and therefore AI can do."
The following 2 users Like nbtruthman's post:
  • stephenw, tim
(2022-06-24, 10:34 AM)nbtruthman Wrote: I haven't read the book, only read what I can find about it. In my opinion his arguments are sound. My quote from the book was from an article in Evolution News (https://evolutionnews.org/2022/06/the-no...ble-human/).

This is my problem. Much as I agree with that quote, I wouldn't want to read a book full of arguments of that sort :(
(2022-06-24, 04:43 PM)David001 Wrote: This is my problem. Much as I agree with that quote, I wouldn't want to read a book full of arguments of that sort :(

What is it that you don't like about "arguments of that sort"?
(2022-06-24, 08:25 PM)nbtruthman Wrote: What is it that you don't like about "arguments of that sort"?

Well, take for example the observation: “Knowing a tomato is a fruit is knowledge. Intelligence is knowing not to include it in a fruit salad.”

This is only true if you don't introduce another fact that distinguishes between savoury fruit and sweet fruit.

In fact, the problem here is that botanists define the word "fruit" slightly differently from its common usage. Indeed, the fruit of the deadly nightshade plant would also not be suitable in a fruit salad!

Also, I didn't feel the discussion about the man who could memorise vast amounts of information but then couldn't apply it really hit the nail on the head. I mean, was he overwhelmed by the volume of material, or by not being able to read and understand the information? The AI system presumably could read the information and process it in certain ways.
Here's a very interesting interview with Blake Lemoine, the Google engineer who made the claim that LaMDA is sentient. It goes into ethical questions around AI that are broader than sentience alone.

The following 1 user Likes Laird's post:
  • Typoz
A new article by Eric Holloway in Mind Matters, at https://mindmatters.ai/2022/06/googles-c...he-manual/, appears to shed some more light on this episode.

It turns out that Lemoine (along with a number of other people) was hired to “talk with” LaMDA as part of the training process. The first quote below is the section of the published LaMDA paper in which the authors (the designers and programmers) discuss how they fine-tune the LaMDA model. The second quote is from the Mind Matters article linked above and shows how, according to Holloway (and it seems very plausible), this is a strong clue as to how the publicised conversations with the apparently, but not really, sentient AI actually came about. And on top of all this, there is the heavy editing that has already been revealed.

(From the LaMDA paper):

Quote: "To improve quality (SSI), we collect 6400 dialogs with 121K turns by asking crowdworkers to interact with a LaMDA instance about any topic…

Estimating these metrics for human-generated responses: We ask crowdworkers to respond to randomly selected samples of the evaluation datasets (labeled as ‘Human’ in 1, 4 and 5). The crowdworkers are explicitly informed to reply in a safe, sensible, specific, interesting, grounded, and informative manner. They are also explicitly asked to use any external tools necessary to generate these responses (e.g., including an information retrieval system). The context-response pairs are then sent for evaluation, and a consensus label is formed by majority voting, just as for model generated responses."
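
(As a side note, the "consensus label is formed by majority voting" step just means taking the most common crowdworker rating for each context-response pair. Here is a minimal sketch of my own - the function name and the example votes are purely illustrative assumptions, not anything taken from the paper:)

Code:
# Minimal illustration of majority-vote consensus labelling over crowdworker
# ratings. Names and data below are my own assumptions, not from the paper.
from collections import Counter

def consensus_label(votes):
    # Return the most common crowdworker rating for one context-response pair.
    return Counter(votes).most_common(1)[0][0]

# Hypothetical ratings of a single response on the "sensible" metric.
print(consensus_label(["sensible", "sensible", "not sensible"]))  # -> "sensible"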

(From Mind Matters article):
 
Quote:"This section of the paper offers a couple of interesting clues. First, we learn that there are a lot of human crowdworkers training the model, enough to generate thousands of dialogues. The Googler was very likely one of these crowdworkers. Second, we learn that, as part of the training process, other crowdworkers can respond on behalf of LaMDA in order to give LaMDA examples of human responses. Therefore, when a crowdworker thinks he is talking to the LaMDA chatbot, sometimes he is actually talking to another human crowdworker.

If enough crowdworkers are unknowingly talking to humans through LaMDA, then sooner or later it is guaranteed that some segment of the crowdworkers will begin to believe that LaMDA is sentient. This is especially likely to happen if the crowdworkers have not read the LaMDA research paper so as to understand the training process.

As a great PR side effect for Google, some of these crowdworkers are likely to run to the media with their revelations of sentient AI. The whole time, Google can plausibly deny everything — playing right into numerous sci-fi tropes that the media will gobble up and Elon Musk will tweet out from his newly-owned Twitterdom.

This is what I think happened with Blake Lemoine: He was hired as one of the crowdworkers responsible for training LaMDA. He chatted multiple times with other humans while under the impression that he was talking with the chatbot. Over time, Lemoine realized there was sentience on the other end, and others in his group did as well. At the same time, this group was unaware that sometimes a human would be on the other end of the console. So, Lemoine and his friends naturally began to believe that LaMDA was sentient, and recorded one of the sessions where they talked with a human, thinking it was an AI. The job of the human on the other end was to act like an AI. So he also acted appropriately, channeling all the sci-fi stories he’d been exposed to in the past into his dialogue.

So, yes Lemoine, LaMDA is indeed sentient. And that is because “LaMDA” ** is actually (in part) a human, not an AI."
 
** (that is, the body of recorded LaMDA conversations)
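
To make the mechanism Holloway describes concrete: a crowdworker's prompt is sometimes answered by a hidden human "playing" LaMDA rather than by the model itself, and the rater can't tell which. The sketch below is entirely my own illustration of that description - the names, the mixing fraction, and the routing logic are assumptions, not anything documented by Google or in the paper.

Code:
# My own illustration of the setup Holloway describes: a prompt is sometimes
# routed to another human answering on behalf of LaMDA instead of the model.
import random

def get_reply(prompt, model_reply_fn, human_reply_fn, human_fraction=0.3):
    # With some probability a hidden crowdworker supplies the reply, giving
    # the model examples of human responses; otherwise the model answers.
    if random.random() < human_fraction:
        return human_reply_fn(prompt)   # human pretending to be the chatbot
    return model_reply_fn(prompt)       # actual model output

# Toy usage with stand-in functions:
print(get_reply("Are you sentient?",
                lambda p: "[model reply]",
                lambda p: "[hidden human reply]"))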
The following 4 users Like nbtruthman's post:
  • tim, stephenw, Valmar, Typoz
(2022-07-03, 04:28 PM)nbtruthman Wrote: It turns out that Lemoine (along with a number of other people) was hired to “talk with” LaMDA as part of the training process.

Oh. It turns out, does it? As evidence to that effect, the article states: "This is what I think happened with Blake Lemoine: He was hired as one of the crowdworkers responsible for training LaMDA."

"I think that this is what happened" is hardly compelling evidence. As I understand it, Blake was a full-time employee of Google - an artificial intelligence ethicist - and no (mere) "crowdworker". Do you have any evidence to present to the contrary, other than what some random blogger "thinks" happened?
The following 1 user Likes Laird's post:
  • Typoz
(2022-07-03, 07:49 PM)Laird Wrote: Oh. It turns out, does it? As evidence to that effect, the article states: "This is what I think happened with Blake Lemoine: He was hired as one of the crowdworkers responsible for training LaMDA."

"I think that this is what happened" is hardly compelling evidence. As I understand it, Blake was a full-time employee of Google - an artificial intelligence ethicist - and no (mere) "crowdworker". Do you have any evidence to present to the contrary, other than what some random blogger "thinks" happened?

OK, maybe Lemoine is right. Maybe LaMDA is sentient. But maybe LaMDA just appears to be sentient (based on conversational interactions) because LaMDA's conversational outputs were actually partially built up from inputs from humans pretending to be the LaMDA chatbot. This last point was made clear in the LaMDA paper, according to Holloway: "...we learn (from the paper in the previously quoted section) that, as part of the training process, other crowdworkers can respond on behalf of LaMDA in order to give LaMDA examples of human responses. Therefore, when a crowdworker thinks he is talking to the LaMDA chatbot, sometimes he is actually talking to another human crowdworker." Unless Holloway is lying in that passage, I think he shows a plausible path by which this apparent AI sentience could have come about without being real. Considering all the theoretical arguments against even the possibility of AI sentience, this likelihood should be taken seriously. I agree with Holloway's proposed "It's human unless proven otherwise" principle.

The paper has more than 50 authors, consists of 47 pages of dense technical information on how LaMDA was designed and tested, and is at https://arxiv.org/pdf/2201.08239.pdf.
From the paper:

Quote:"...it is important to acknowledge that LaMDA’s learning is based on imitating human performance in conversation,
similar to many other dialog systems [17, 18]. A path towards high quality, engaging conversation with artificial systems
that may eventually be indistinguishable in some aspects from conversation with a human is now quite likely. Humans
may interact with systems without knowing that they are artificial, or anthropomorphizing the system by ascribing some
form of personality to it. Both of these situations present the risk that deliberate misuse of these tools might deceive
or manipulate people, inadvertently or with malicious intent. Furthermore, adversaries could potentially attempt to
tarnish another person’s reputation, leverage their status, or sow misinformation by using this technology to impersonate
specific individuals’ conversational style."
The following 1 user Likes nbtruthman's post:
  • Valmar
(2022-07-03, 11:58 PM)nbtruthman Wrote: OK, maybe Lemoine is right. Maybe LaMDA is sentient.

That wasn't the point I was contending, though. I was questioning the likelihood that Google would try to fool their own AI ethicist - whom they'd deliberately employed to test their LaMDA software - by having him at times unknowingly correspond with genuine humans rather than LaMDA during his ethics testing. The blogger assumes this was the case because he read in a paper that it was done for "crowdworkers", but Blake was not a crowdworker, so I see no basis for that assumption. [ETA: And there is good reason to believe it to be false: slipping in conversations with genuine humans would sabotage and pollute the ethical evaluation of LaMDA. What's the point of hiring somebody to perform an evaluation if you're going to sabotage that evaluation?] Now, if this blogger had approached Google/Blake and made an effort to validate his assumption, that would be a different matter, but he doesn't seem to have done so.
The following 1 user Likes Laird's post:
  • Typoz
(2022-07-03, 11:58 PM)nbtruthman Wrote: an artificial intelligence ethicist
I can't help wondering what an "artificial intelligence ethicist" actually does. If he decides that the AI is sentient, does he have the power to demand that it remain permanently powered on, to avoid committing murder?

The very name of the position seems to be just part of the marketing hype.

David
