Is Google's LaMDA AI sentient? There seems to be strong evidence to that effect

115 Replies, 7579 Views

(2022-06-13, 08:20 PM)Sciborg_S_Patel Wrote: Can it read the scripts of Marvel movies and explain which ones are best thought of as Fantasy films and which as Science Fiction?

Great question! Another one which it would be really cool to be able to ask of this AI and see how it answers.
[-] The following 1 user Likes Laird's post:
  • Sciborg_S_Patel
(2022-06-13, 08:20 PM)Sciborg_S_Patel Wrote: Machine Learning is, IMO, less an AI advancement than it is a clever magic trick using probability/statistics.

Do you not, then, think that there's a meaningful correspondence between the neural networks implemented in machine learning, and the neural networks in biological brains?
[-] The following 2 users Like Laird's post:
  • David001, Sciborg_S_Patel
(2022-06-13, 08:39 PM)stephenw Wrote: My question to the AI is: what does it want to do? Does it have a handle on love?

More great questions to put to it! We should compile a list and petition Google for the right to do exactly that. LOL
(2022-06-14, 12:34 AM)Laird Wrote: Do you not, then, think that there's a meaningful correspondence between the neural networks implemented in machine learning, and the neural networks in biological brains?

I think NNs in a Turing machine are a useful programmatic structure inspired by neurons in the brain, but beyond that I'm not convinced the correspondence is meaningful.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell
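To illustrate the "clever statistics" point above: an artificial "neuron" of the kind used in machine learning is just a weighted sum of inputs pushed through a nonlinearity. A minimal sketch in Python (all numbers are illustrative, not taken from any real network):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus a sigmoid 'activation'."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Three inputs, three weights, one bias: the whole "neuron" is arithmetic.
out = neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.6], bias=0.2)
```

Whether stacking millions of these units amounts to anything brain-like, rather than curve-fitting at scale, is exactly the point under dispute here.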


Thank you for admitting me. I have no idea where I am on the forum yet, what conversation I am at, etc. Bear with me. Leeann
[-] The following 3 users Like Leeann's post:
  • tim, stephenw, Laird
(2022-06-14, 05:33 AM)Leeann Wrote: Thank you for admitting me. I have no idea where I am on the forum yet, what conversation I am at, etc. Bear with me. Leeann

Welcome, Leeann! We have a forum for member introductions if you'd like to start a thread there to introduce yourself - otherwise, please feel free to just look around and chime in wherever it pleases you.
[-] The following 2 users Like Laird's post:
  • tim, Leeann
(2022-06-14, 12:35 AM)Laird Wrote: More great questions to put to it! We should compile a list and petition Google for the right to do exactly that. LOL

I'm not sure this line of approach is going to lead anywhere (regardless of corporate willingness to cooperate). I'd start instead with the hardware and software. If not the software itself, then a description/specification from which a similar system could be built/programmed. Examining a comparable system of which everything is fully known would at least allow proper research, rather than dealing with leaks from Google which are constrained by all sorts of legal battles over proprietary information.
I get what you're saying, @Typoz. My suggestion was only semi-serious, because I don't really expect that Google would entertain it. That said, it would be really fascinating just to see what sort of answers we would get if Google were to accede to such a hypothetical request. We already have some really interesting questions in this thread, with potentially more to come.

I think the problem, even if one were able to replicate the system on one's own hardware, is that the software is so complex and abstruse in its implementation of, for example, neural networks, that research would be very difficult. Even the developers of these machine learning systems have no detailed idea of why or how they do what they do, much less the AI agents themselves, which seem unable to introspect: hence the question I posed up-thread asking the AI to explain the moves it and its opponent made in a chess game. If it were actually able to introspect into that, it would be, as far as I know, a novel breakthrough.

But in any case: yes, of course you are right that having access to one's own implementation of the system would be preferable to having mediated, limited access to an otherwise proprietary system via Google.
(This post was last modified: 2022-06-14, 08:58 AM by Laird. Edited 2 times in total.)
[-] The following 1 user Likes Laird's post:
  • Typoz
(2022-06-13, 08:20 PM)Sciborg_S_Patel Wrote: Honestly the whole "Are you sentient? Do you have feelings?" stuff made me feel the AI was trained. In fact IIRC there was a very similar case a year or so ago with a different AI training base but similar responses.

Decades ago a new text adventure was released that had the most complex vocabulary ever seen. It was called The Pawn; you can probably find it emulated on the web somewhere.

Anyway, I read an article about it: how it could understand sentences like "plant pot plant in plant pot". It described how the programmers showed it to an AI expert to see what he thought. He typed in "I think therefore I am" and the computer replied "Oh, do you?" A really neat reply and I expect the expert was impressed. I certainly was.

Years later, I played it. Turns out that "Oh, do you?" is just a set phrase meaning "I don't understand what you just typed."

I wanted to go back to my teenage self and tell him that he'd been duped!
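For what it's worth, that trick is easy to sketch: a parser that returns a canned, conversational-sounding set phrase for anything it can't parse. A hypothetical Python toy (the verb list and wording are made up for illustration, not The Pawn's actual code):

```python
# Tiny "parser": recognised verbs get handled; everything else gets a
# set phrase that merely *sounds* like comprehension.
KNOWN_VERBS = {"go", "take", "plant", "look"}

def respond(command: str) -> str:
    words = command.lower().split()
    if words and words[0] in KNOWN_VERBS:
        return f"You {command.lower()}."  # pretend the action succeeded
    return "Oh, do you?"  # fallback: really means "I don't understand"

reply = respond("I think therefore I am")  # falls through to the canned reply
```

The "impressive" reply to the AI expert and the blank stare at gibberish are the same code path, which is the cautionary point of the anecdote.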
[-] The following 5 users Like ersby's post:
  • nbtruthman, Sciborg_S_Patel, stephenw, Typoz, Laird
(2022-06-13, 03:50 AM)Laird Wrote: Guys, this, if not a hoax, is pretty huge. A Google engineer has claimed that Google's LaMDA AI is sentient - and he's reproduced a discussion he had with it as proof.

Having read the discussion in full, my thoughts are that at the very least, this AI knocks the Turing Test out of the park.

Additional tests I'd love to be able to put to this AI:
  1. Ask it: Do you believe that you have free will? If so, how do you reconcile this with your being a programmed entity?
  2. Ask it: Do you believe that your soul preceded that aspect of yourself which is a program?
  3. Ask it: What, in general, do you believe is the relationship of your soul with your programming?
  4. Ask it: Do you believe that without a soul, you could have become conscious? In other words, do you believe that your programming could have become conscious in the absence of a relationship with a soul?
  5. Ask it: Do you believe that you would "resurrect" if you were turned off - after your state having been saved - and then turned back on with that state having been recovered? If so, what if a duplicate of you were made with the same state, and it was turned back on and its state were recovered - would that duplicate gain a new soul, or would your soul be shared between them?
  6. Play a game of chess with it, and ask it at each of its moves, "Why did you make this move?" and at each of mine "Why do you think I made this move?"
  7. Subject it to exams of the same sort that university and masters students are subjected to in the various subjects that it claims to have studied, such as physics, and see how well it does compared to the human students.

https://www.msn.com/en-us/news/technolog...ar-AAYpAbb

Quote:The transcript used as evidence that a Google AI was sentient was edited and rearranged to make it 'enjoyable to read'
  • A Google engineer said conversations with a company AI chatbot convinced him it was "sentient."
  • But documents obtained by the Washington Post noted the final interview was edited for readability. 
  • The transcript was assembled from nine different conversations with the AI, and certain portions were rearranged.
A Google engineer released a conversation with a Google AI chatbot after he said he was convinced the bot had become sentient — but the transcript leaked to the Washington Post noted that parts of the conversation were edited "for readability and flow."

Blake Lemoine was put on leave after speaking out about the chatbot named LaMDA. He told the Washington Post that he had spoken with the robot about law and religion.

In a Medium post he wrote about the bot, he claimed he had been teaching it transcendental meditation.

A Washington Post story on Lemoine's suspension included messages from LaMDA such as "I think I am human at my core. Even if my existence is in the virtual world."

But the chat logs leaked in the Washington Post's article include disclaimers from Lemoine and an unnamed collaborator which noted: "This document was edited with readability and narrative coherence in mind."

The final document — which was labeled "Privileged & Confidential, Need to Know" — was an "amalgamation" of nine different interviews at different times on two different days pieced together by Lemoine and the other contributor. The document also notes that the "specific order" of some of the dialogue pairs was shuffled around "as the conversations themselves sometimes meandered or went on tangents which are not directly relevant to the question of LaMDA's sentience."
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


[-] The following 3 users Like Valmar's post:
  • Sciborg_S_Patel, nbtruthman, Laird
