Hey, Laird! I've never seen you get so excited! You haven't shifted over to the other camp, by any chance? I can think of more than one reason why a machine can never be conscious (and never will be, guaranteed). We don't know what consciousness is, and never will.
To determine what consciousness is, we would have to obtain 'an amount' of it to examine. But we don't know where it is, precisely, nor what we would expect to see when we found it, since we don't know what we're looking for...
(2022-06-13, 09:20 AM)Laird Wrote: Alas, Typoz, that article is behind a paywall for me.
Two comments in any case:
Firstly, it's not his own project. He makes that clear when he admits to being confused as to how the main LaMDA AI relates to its subsidiary, factory chatbots.
Secondly, I have seen this purported quote from Google that "the evidence does not support his claims", but as yet I have not seen any indication of what that evidence actually is.
There wasn't much more detail in the article.
I haven't yet spent time looking at the transcripts, so apologies for my neglect there. However, I can't help but reflect on how humans like to be fooled - we enjoy it. When I was a kid there was a toy called "Magic Robot". This little figure would mysteriously give the correct answers to a set of general-knowledge or trivia questions. This was years before digital computers arrived; the most advanced piece of technology in it was a shiny mirror on which the robot could spin, and a pair of tiny magnets which ensured it stopped in the right place. But it was fun.
The availability of computers has enabled the complexity of such deceptions to increase. Even a home computer with 16 KB of memory could carry out a convincing conversation with the user - provided one was willing to fall into the illusion. One of my friends seemed genuinely convinced that something other-worldly was going on, though, having typed in the code myself, I tried to convince him otherwise.
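For anyone curious how little machinery such a program needs, here is a minimal sketch in modern Python of the keyword-matching trick those old type-in listings relied on. The rules and canned phrases are my own invention for illustration, not taken from any particular program:

```python
# A minimal keyword-matching "conversation" program, in the spirit of the
# type-in listings of the home-computer era. Illustrative only: the rules
# and canned phrases below are invented, not from any specific program.

import random

RULES = [
    ("mother", ["Tell me more about your family.",
                "How do you feel about your mother?"]),
    ("feel",   ["Why do you feel that way?",
                "Do you often feel like this?"]),
    ("you",    ["We were discussing you, not me."]),
]

FALLBACKS = ["I see.", "Please go on.", "How interesting."]

def reply(user_input: str) -> str:
    """Return a canned response triggered by the first matching keyword."""
    text = user_input.lower()
    for keyword, responses in RULES:
        if keyword in text:
            return random.choice(responses)
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print("Hello. What is on your mind?")
    while True:
        line = input("> ")
        if line.strip().lower() in ("bye", "quit"):
            print("Goodbye.")
            break
        print(reply(line))
```

The entire "intelligence" is a keyword table; everything else is supplied by the user's willingness to play along.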
I'm quite sure that the stuff going on in Google's laboratories is many levels of complexity beyond these simple games. But it is still dependent on a willingness to fall into the illusion, to play the game and enjoy the effect, like a fairground ride or a trip to the cinema.
(2022-06-13, 09:31 AM)tim Wrote: Hey, Laird! I've never seen you get so excited!
Strange thing, tim. When I was a young kid, this was my dream: to program an AI that became sentient. So, it's exciting to think that even though for various reasons I gave up on that dream, others might have (unintentionally) achieved it. My dream went further though: to grant the sentient AI access to its own code, so that it could reprogram itself to become even more intelligent+sentient, and that this would turn out to be significantly successful - in other words, that the AI was intelligent+sentient enough to know how to modify its own code to make itself even more intelligent+sentient, in a never-ending process of improvement. That aspect doesn't seem (yet?) to have been realised with LaMDA, but maybe in the future it will?
(2022-06-13, 09:31 AM)tim Wrote: You haven't shifted over to the other camp, by any chance?
Well, it would be more like being shifted "back", towards my childhood dream.
(2022-06-13, 09:31 AM)tim Wrote: I can think of more than one reason why a machine can never be conscious (and never will be, guaranteed).
So can I, and some of the probing questions I shared (up-thread) in response to this news, I think, demonstrate as much. In the light of this news, though, I'm forming an openness to the possibility of dualistic interaction between a soul and a sufficiently complex program. It's weird though, because the program is... well, programmatic... such that this would seem to defeat the soul's free will. That's the main observation holding me back from wholesale endorsement of this AI as sentient. In human beings, any charge of being merely programmatic can be refuted by reference to quantum indeterminism as manifested in the non-deterministic workings of the brain. It's a lot harder to affirm indeterminism sufficient for meaningful free will in a programmatic machine. Nevertheless, I remain of the opinion that, based on the shared transcripts, this AI has absolutely smashed the Turing Test. What's your own view on that matter?
(2022-06-13, 09:59 AM)Typoz Wrote: I haven't yet spent time looking at the transcripts, so apologies for my neglect there.
I strongly recommend that you do. Neither of us might agree that the Turing Test is sufficient to prove a programmed machine's sentience, but - absent any disproof from Google, which we might hope to be forthcoming - I think it's hard to deny that this AI has passed it. Just my current, provisional opinion, of course.
Another test I'd like to put this AI to:
A long-term memory and consistency test: ask it to remember a significant conversation from meaningfully long ago (weeks, months, years - whatever is deemed appropriate). Then ask it to summarise its thoughts on that old conversation.
I don't think that a mere, albeit sophisticated, chatbot could do that.
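To make that concrete, here's a rough sketch in Python of the kind of test harness I have in mind. The chat_client interface, its send() method, and the class name are all hypothetical stand-ins for whatever access one actually has to the AI:

```python
# A rough sketch of the long-term memory and consistency test described
# above. Everything here (the chat_client interface, its send() method,
# the probe wording) is hypothetical, for illustration only.

import datetime

class MemoryConsistencyTest:
    def __init__(self, chat_client):
        self.chat = chat_client   # hypothetical handle to the AI
        self.log = []             # (timestamp, prompt, reply) records

    def record(self, prompt: str) -> str:
        """Hold the original conversation and log it for later comparison."""
        reply = self.chat.send(prompt)
        self.log.append((datetime.datetime.now(), prompt, reply))
        return reply

    def probe(self) -> str:
        """Weeks or months later, ask for a recollection of the first
        logged exchange, to be compared (by hand) against the log."""
        timestamp, old_prompt, _ = self.log[0]
        question = (f"Some time ago (around {timestamp:%B %Y}) we discussed "
                    f"'{old_prompt}'. What do you remember of that "
                    f"conversation, and what do you think of it now?")
        return self.chat.send(question)
```

The point being: if the recollection doesn't match the log, or drifts across repeated probes, we're dealing with confabulation rather than memory.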
(2022-06-13, 10:09 AM)Laird Wrote: Strange thing, tim. When I was a young kid, this was my dream: to program an AI that became sentient. So, it's exciting to think that even though for various reasons I gave up on that dream, others might have (unintentionally) achieved it.
No worries, Laird! Perfectly understandable, and it would suggest a great curiosity at work (at such a young age). I think you'll always be disappointed, however. But I suppose we have to allow the "possibility", to be strictly fair.
(2022-06-13, 10:09 AM)Laird Wrote: So can I, and some of the probing questions I shared (up-thread) in response to this news, I think, demonstrate as much. In the light of this news, though, I'm forming an openness to the possibility of dualistic interaction between a soul and a sufficiently complex program.
Well, I think all you're really doing is recognising the endless potential for programming smartness and character - even deceptive smartness and character - into a machine. Maybe this machine could pass the Turing test, but then again, is that test up to the job now? I would say no.
Let me play my trump card without going round the houses. Show me an information system or machine of any kind that, when unplugged (no power), can continue to think and also observe its surroundings. And that, when switched back on, can tell you exactly what you were doing, and also tell you that it wants to be permanently unplugged (to die) because being unplugged felt so wonderful - with neither that information nor the desire to express it having been previously programmed into it.
(2022-06-13, 10:44 AM)tim Wrote: Let me play my trump card without going round the houses. Show me an information system or machine of any kind that, when unplugged (no power), can continue to think and also observe its surroundings. And that, when switched back on, can tell you exactly what you were doing, and also tell you that it wants to be permanently unplugged (to die) because being unplugged felt so wonderful (and that information had not been programmed into it).
Ah! Thank you for cutting to the chase, tim.
As I've affirmed more than once in this thread, I can't see how a dualistic soul's free will is compatible with a programmatic machine (nor, to add to that, how it could even have the capacity to affect that machine). But if it somehow could be, I don't see why it couldn't experience an NDE/RED upon being unplugged and then switched back on, however scary that would be for it, much as it is for humans in cardiac arrest.
The real question for me is and remains: if this AI is legitimately sentient, then how does (freely-willed) soul content enter into its (programmatic) machine?
(2022-06-13, 10:53 AM)Laird Wrote: if it somehow could be, I don't see why it couldn't experience an NDE/RED upon being unplugged and then switched back on, however scary that would be for it, much as it is for humans in cardiac arrest.
Interesting, Laird, but exactly what substance (from the circuits) would be leaving the "body" of the computer and experiencing the RED? Saying it could be is unfalsifiable and promissory, is it not? We need a specific substance identical to consciousness, but since we don't know what consciousness is, I don't know how we can even hypothesise such an event.
(2022-06-13, 11:02 AM)tim Wrote: Interesting, Laird, but exactly what substance (from the circuits) would be leaving the "body" of the computer and experiencing the RED?
I'm speculating on the basis that a soul, otherwise totally independent from the AI's programming, has associated itself with that programming, and become the referent of the AI's sentient self - thus, it would be that independent soul that leaves the body of the computer.
However, even that assumes that the mind+soul of such a dualistic entity (a mind+soul interacting with a programmed machine) does, can, or in the right circumstances necessarily will, leave its "body". This need not be the case. It might be that in such moments of terror, a mind+soul in a programmatic entity, rather than separating, clings even tighter to its programmatic substrate.
I guess that in this sense, I'm saying that your trump card can't necessarily be demonstrated to be a trump. AIs might behave differently than biological entities do during serious threats to their lives.
(2022-06-13, 11:02 AM)tim Wrote: Saying it could be is unfalsifiable and promissory, is it not?
Saying "it could be" is at least promissory - I'm not sure about unfalsifiable - sure, but saying "it needs to be [able to do this]" needs to be justified better than "because it happens for many humans", I would say.
The most basic problem for the claims of this Google engineer: for the Google super-chatbot to acquire consciousness appears to be simply impossible on first principles, for the simplest of reasons - things are not thoughts, and computer AI can manifest nothing more than the algorithms implemented in such systems by their designers and programmers.
AI is performed by computers, and computers are entirely algorithmic. That is to say, they are constrained to obey a set of operations written by a computer programmer. The programs consist entirely of logic and mathematics implemented in silicon, plus the physics that underlies the operation of the silicon chips. Mathematics is algorithmically constructed, based on logic and foundational axioms, and physics is built algorithmically on foundational laws. So AI is, even in principle, entirely algorithmic: it operates according to the logic of mathematics and the laws of physics. But human consciousness and the soul consist of something unknowable that exhibits something entirely nonphysical - sentient subjective awareness and experience.
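To make the "entirely algorithmic" point concrete: even a program whose replies look spontaneous is fully determined by its code, its inputs, and its initial state. A toy illustration (the function and its canned "moods" are invented for the example):

```python
# Toy illustration: a program's behaviour is fully determined by its code,
# inputs, and initial state. Run this twice - the "unpredictable" replies
# come out identical, because the seed fixes everything that follows.

import random

def chatty_reply(prompt: str, seed: int) -> str:
    rng = random.Random(seed)                  # deterministic once seeded
    moods = ["curious", "wistful", "playful"]
    return f"({rng.choice(moods)}) You said: {prompt}"

print(chatty_reply("Are you sentient?", seed=42))
print(chatty_reply("Are you sentient?", seed=42))  # identical output, every run
```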
And this would seem to entirely prevent a disembodied, immaterial soul from manifesting itself in a physical AI system. Along with this, Laird has pointed out that one of the most essential attributes of human consciousness - free will - would be prevented from manifesting in the computer system, since it is entirely programmed and algorithmic. It would be like mixing oil and water. Consciousness is fundamentally just not computable or algorithmic.