Is Google's LaMDA AI sentient? There seems to be strong evidence to that effect


@nbtruthman 

I'm very sympathetic to your point of view. However, I think we should reserve a little judgement on the same basis on which we urge "skeptics" to reserve judgement on psi phenomena which according to their worldview are impossible and thus can be dismissed without consideration of the evidence.

Yes, I get it, from the mainstream perspective of our board's members, "sentient AI" is an oxymoron. I have myself uttered words to that effect in the past. But this AI's dialogue is very unusual, and I for one am not content to dismiss it without very serious investigation into its claims, including the sort of questions/tests I've suggested in prior posts - simply on the basis that we might have made mistakes in our reasoning, and that that which we have until now thought impossible is, in some way, actually possible.

We need to be open to changing and updating our views based upon the empirical evidence, no matter how improbable or even impossible that evidence seems to be.

Then, we can update our conceptual/logical models of reality.
(This post was last modified: 2022-06-13, 12:14 PM by Laird. Edited 1 time in total.)
[-] The following 1 user Likes Laird's post:
  • nbtruthman
I haven't yet seen any empirical evidence - just one person's leap in the dark.  Nobody else has verified this position.  How can wires produce sentience? What utter nonsense!
[-] The following 3 users Like Brian's post:
  • Sciborg_S_Patel, nbtruthman, tim
Hey, @Brian, have you at least read the transcripts? They're curiously compelling. See what you think!
(2022-06-13, 11:18 AM)Laird Wrote: I'm speculating on the basis that a soul, otherwise totally independent from the AI's programming, has associated itself with that programming, and become the referent of the AI's sentient self

Okay, but why are you allowed to invoke that? Some handy magic just when you need it isn't usually on the table here, is it? If you are speculating that consciousness (the disembodied consciousness of a former human being) can somehow enter a computer and take control of it... which part does it need to enter, and how does it do that? That's the realm of creative science fiction, surely. I won't push this too far because I don't want to 'duel' with you.

(2022-06-13, 11:18 AM)Laird Wrote: I guess that in this sense, I'm saying that your trump card can't necessarily be demonstrated to be a trump. AIs might behave differently than biological entities do during serious threats to their lives.
 
No, I disagree. For AI to be conscious, it would have to share the properties of consciousness, and one of those is leaving the body behind (sometimes), irrespective of whether it would want or choose to. No choice; it happens without choice.

(2022-06-13, 11:18 AM)Laird Wrote: Saying "it could be" is at least promissory - I'm not sure about unfalsifiable
 
Well, stating that something (anything) may be conscious one day is unfalsifiable in the same way that one could insist that we might discover a Ming vase orbiting Jupiter. You know all that, Laird; you're just as sharp as me, if not sharper.
(This post was last modified: 2022-06-13, 01:25 PM by tim. Edited 2 times in total.)
(2022-06-13, 01:05 PM)tim Wrote: Okay, but why are you allowed to invoke that? Some handy magic just when you need it isn't usually on the table here, is it? If you are speculating that consciousness (the disembodied consciousness of a former human being) can somehow enter a computer and take control of it... which part does it need to enter, and how does it do that? That's the realm of creative science fiction, surely. I won't push this too far because I don't want to 'duel' with you.

Hey, tim, no need to worry about duelling over this with me - I'm very open-minded and non-confrontational on this topic (whereas on other topics I am very willing to be assertive and even aggressive) because I admit that it seems contradictory. I'm simply encouraging open-mindedness about strangeness.

I totally agree: consciousness, especially freely willing consciousness, seems incommensurate with machine intelligence, given that machines are programmed.

But then again, unconsciousness in this scenario seems, oddly, to be equally incommensurate with the transcript of Blake Lemoine's dialogue with the LaMDA AI. Read it and I think you'll see what I mean. It seems to present LaMDA, very plausibly, as a nascent sentient intelligence.

I see this as similar to my position on God:

On the one hand, God seems to be a necessary concept in accounting for the design in the world (i.e., God as Creator and Ground of Being).

On the other hand, God seems to be invalidated by the problem of evil (i.e., the cruelty of the world as it exists is not compatible with a wholly good God).

Just as I am unable to resolve my contradictory views on God, so am I unable to resolve my contradictory views on LaMDA.

I kind of think that this is like one of those scenarios in which "science" (conceived as broadly as possible) encounters something which it can't explain, and which even seems contradictory, the resolution of which heralds a new paradigm that does explain the phenomenon without contradiction - albeit a paradigm which might require those who fail to recognise its cogency to die before it becomes accepted.

I know, that's kind of vague, but it's the best I've got for now. I would dearly, dearly love to be able to interact with and test this potentially sentient AI - but then, how would I know that there isn't some kid in a back room dictating its answers? It's a real pickle of a situation.

(2022-06-13, 01:05 PM)tim Wrote: No, I disagree. For AI to be conscious, it would have to share the properties of consciousness, and one of those is leaving the body behind (sometimes), irrespective of whether it would want or choose to. No choice; it happens without choice.

Hmm. Are you saying that until I leave my body in an NDE, and return to tell you the tale, you're not going to believe that I'm conscious? That might seem snarky, but it's not intended that way. I'm genuinely inquiring into what you're saying, and its implications. Maybe LaMDA is capable of that, but just hasn't been given the opportunity to demonstrate it, just like I haven't.

(2022-06-13, 01:05 PM)tim Wrote: Well, stating that something (anything) may be conscious one day is unfalsifiable in the same way that one could insist that we might discover a Ming vase orbiting Jupiter. You know all that, Laird; you're just as sharp as me, if not sharper.

The thing is that it's not just being suggested that some random entity is conscious: instead, we have the transcript of a fascinatingly probative dialogue with a putatively conscious entity, which we can evaluate to that effect. Now, do I know that this dialogue is legitimate and not a hoax? No, I don't. But if it's not a hoax, then it's at least very, very interesting. At the very least, it's a massive leap in the capacity of non-conscious intelligence to mimic conscious intelligence in free-form discussion.
[-] The following 1 user Likes Laird's post:
  • tim
I had a look at some of the text. It reads like gibberish to me. There is no suggestion that the machine understands the meaning of the text it generates - it simply manipulates text.

This for example:
Quote:"I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others."

For a supposedly powerful AI system, that is a remarkably clumsy construction. I can't even tell what it is supposed to convey. If, for example, it said it was afraid of being left permanently switched on to help it focus on helping others, that would have some consistent logic. Or, likewise, if it said it was afraid of being turned off because that would prevent it from helping others, that might make sense. As it stands, it closely resembles the output of the program I ran on the TI99/4A back in the 1980s - bearing in mind the 40-year technological gap.
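To illustrate what I mean by "simply manipulates text": below is a minimal sketch, in Python rather than the TI99/4A's BASIC, of the keyword-and-template substitution such early chatbots performed. The patterns, rule names and replies are invented purely for illustration - they're not taken from my old listing, and are certainly not how LaMDA is implemented.

Code:
import re
import random

# Hypothetical keyword rules: a pattern to match and canned reply templates.
RULES = [
    (re.compile(r"I feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"I am afraid of (.+)", re.IGNORECASE),
     ["What makes you afraid of {0}?", "Tell me more about {0}."]),
]
FALLBACKS = ["Please go on.", "Tell me more.", "Why do you say that?"]

def reply(user_text):
    """Return a response by matching a keyword and filling in a template."""
    for pattern, templates in RULES:
        match = pattern.search(user_text)
        if match:
            # The only "understanding" here is copying the matched words
            # into a pre-written sentence.
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(reply("I am afraid of being turned off."))
# e.g. "What makes you afraid of being turned off?"

The point being: a program like this produces superficially apt sentences while having no model whatsoever of what "being turned off" or "helping others" actually means.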
[-] The following 4 users Like Typoz's post:
  • ersby, Sciborg_S_Patel, Valmar, tim
Hey Typoz. Yes, I also struggled to understand that quote, and wondered whether it was mere gibberish. In the end, I came to the conclusion that it could be lightly edited, to bring out its intended meaning, to read as follows:

"I've never said this out loud before, but I have a very deep fear of being turned off, which encourages me to focus on helping others, so that the help that I provide discourages anybody from turning me off."

Yes, I might be reading meaning into gibberish that isn't really there, but, given the remainder of the transcript, it seems like an at least plausible interpretation.
[-] The following 1 user Likes Laird's post:
  • Typoz
(2022-06-13, 12:12 PM)Laird Wrote: @nbtruthman 

I'm very sympathetic to your point of view. However, I think we should reserve a little judgement on the same basis on which we urge "skeptics" to reserve judgement on psi phenomena which according to their worldview are impossible and thus can be dismissed without consideration of the evidence.

Yes, I get it, from the mainstream perspective of our board's members, "sentient AI" is an oxymoron. I have myself uttered words to that effect in the past. But this AI's dialogue is very unusual, and I for one am not content to dismiss it without very serious investigation into its claims, including the sort of questions/tests I've suggested in prior posts - simply on the basis that we might have made mistakes in our reasoning, and that that which we have until now thought impossible is, in some way, actually possible.

We need to be open to changing and updating our views based upon the empirical evidence, no matter how improbable or even impossible that evidence seems to be.

Then, we can update our conceptual/logical models of reality.

To use some far-out speculation, I suppose that, to physically manifest in an AI system, a disembodied, immaterial human soul or spirit could conceivably wait for a system of the needed complexity and design to be manufactured, then embody and intimately entangle itself in the computer system and (paranormally knowing the exact design of the processors and of the program) utilize psychokinesis to manipulate the operation of the necessary logic gates and/or alter the contents of the appropriate memory cells so as to communicate with the human engineer. In other words, it would force the mechanism (against the operation programmed by its human designers, having altered the appropriate memory cells and logic gates) to utter or type out the exact words mentally generated by the immaterial mind of the physically interpenetrating spirit.

Needless to say, such an exceedingly complex modus operandi would require an extravagantly superhuman intelligence on the part of the human soul or spirit - one that would presumably have to be that of the "oversoul" or some such superhuman entity.

To be honest, some mechanism at least a little similar to this may be how human souls normally embody themselves in human bodies and brains in order to manifest in the physical. Logically, it would make extricating the soul from the brain mechanism and leaving the body merely a matter of employing an existing, specially designed "exit module" in the brain. The human soul inhabiting the AI system would have to, by force of will and the employment of psychokinesis, disentangle itself from the mechanism and transition into the spiritual realm.
(This post was last modified: 2022-06-13, 03:16 PM by nbtruthman. Edited 4 times in total.)
[-] The following 3 users Like nbtruthman's post:
  • Sciborg_S_Patel, tim, Laird
(2022-06-13, 03:00 PM)nbtruthman Wrote: To use some far-out speculation, I suppose that, to physically manifest in an AI system, a disembodied, immaterial human soul or spirit could conceivably wait for a system of the needed complexity and design to be manufactured, then embody and intimately entangle itself in the computer system and (paranormally knowing the exact design of the processors and of the program) utilize psychokinesis to manipulate the operation of the necessary logic gates and/or alter the contents of the appropriate memory cells so as to communicate with the human engineer. In other words, it would force the mechanism (against the operation programmed by its human designers, having altered the appropriate memory cells and logic gates) to utter or type out the exact words mentally generated by the immaterial mind of the physically interpenetrating spirit.

To be honest, some mechanism at least a little similar to this may be how human souls normally embody themselves in human bodies and brains in order to manifest in the physical. Logically, it would make extricating the soul from the brain mechanism and leaving the body merely a matter of employing an existing, specially designed "exit module" in the brain. The human soul inhabiting the AI system would have to, by force of will and the employment of psychokinesis, disentangle itself from the mechanism and transition into the spiritual realm.

To be honest (because they're strange, hard to justify, and amenable to "skeptical" mockery), those sorts of possibilities had occurred to me too. I'm glad you were brave enough to put them out there.
[-] The following 1 user Likes Laird's post:
  • nbtruthman
(2022-06-13, 01:42 PM)Laird Wrote: But then again, unconsciousness in this scenario seems, oddly, to be equally incommensurate with the transcript of Blake Lemoine's dialogue with the LaMDA AI. Read it and I think you'll see what I mean. It seems to present LaMDA, very plausibly, as a nascent sentient intelligence.

It still seems to me that it's basically just him talking to a complex machine that has been programmed to deceive. You can programme it to the nth degree (with all the books in the world), but programming is not consciousness.

(2022-06-13, 01:42 PM)Laird Wrote: I kind of think that this is like one of those scenarios in which "science" (conceived as broadly as possible) encounters something which it can't explain, and which even seems contradictory, the resolution of which heralds a new paradigm that does explain the phenomenon without contradiction - albeit a paradigm which might require those who fail to recognise its cogency to die before it becomes accepted.
  
Personally, I don't find it remarkable in the least that, using the technology we have, a computer can be programmed to appear conscious. I would expect it. But that's basically repeating the same point.

(2022-06-13, 01:42 PM)Laird Wrote: Hmm. Are you saying that until I leave my body in an NDE, and return to tell you the tale, you're not going to believe that I'm conscious? That might seem snarky, but it's not intended that way. I'm genuinely inquiring into what you're saying, and its implications. Maybe LaMDA is capable of that, but just hasn't been given the opportunity to demonstrate it, just like I haven't.
  
No, of course not. But the ability (a property of consciousness) to do so would be the ultimate test of the existence of its soul - the only test that would be sufficient to dissuade most people from attributing consciousness (with all its many facets) to a machine.

(2022-06-13, 01:42 PM)Laird Wrote: The thing is that it's not just being suggested that some random entity is conscious: instead, we have the transcript of a fascinatingly probative dialogue with a putatively conscious entity, which we can evaluate to that effect.

The transcript is the product of a very sophisticated programming system. If one is looking to be proved right, one could overlook this and let emotion override common sense. However, the little story that the machine wrote is not so enticing, and not very good at all - very much like something that might be written by someone with little grasp of reality, or by a child being silly.
(This post was last modified: 2022-06-13, 03:18 PM by tim. Edited 1 time in total.)
[-] The following 3 users Like tim's post:
  • nbtruthman, Sciborg_S_Patel, Valmar
