(2023-05-29, 08:20 PM)Typoz Wrote: This step seems way out of sequence to me. If the proposal is to give rights to conscious entities, this planet is filled with vast numbers of different kinds, from birds of the air and animals of the land to creatures of the oceans, lakes and rivers. I really think we should attend to these first, or we will look foolish as well as devalue our own selves and our higher aspirations.
Oh, I am not saying we should be in any rush to create said synthetic conscious entities; the treatment of all kinds of life should be vastly improved before that. [Also, maybe we just shouldn't make synthetic life because it might upset some Natural Balance.]
However, I think there is some difficulty in the language of rights - we don't really "grant" them in the sense of giving a gift, but rather acknowledge a moral standard.
So *if* someone put together synthetic life that we had good reason to think was conscious - maybe it matches some quantum biology structure that we found was necessary for human minds - then I think we would have to grant these robots/androids/whatever the rights we grant to beings of our sentience/sapience level.
As to whether the structure is what makes the Ur Mind split into a new alter, prompts the Classical God to stamp intellect into a newly created soul, leads an existing soul to choose the robot/android to inhabit, arranges the particles of panpsychic consciousness in just the right way, etc etc...
...The robot/android will probably just join in the debate with us, but maybe provide some new insight...and hopefully not enslave us...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'
- Bertrand Russell
(2023-05-29, 06:06 PM)Sciborg_S_Patel Wrote: I think if we emphasize ARTIFICIAL we can accept simulations of our own - or alternative - mental work as valid AI.
As for a consciousness that is sentient/sapient, I actually am not against this working or extending rights to such an entity provided there is a clear reason to think said entity is conscious.
For me that "clear reason" would have to be discovering the minimal set of brain-body correlations that allow for consciousness. If that minimal set was created in, say, silicon + metal, and this actually made the entity turn on and act sentient/sapient...well, I would be hard pressed to deny it rights.
A program running on a Turing Machine that makes said Turing Machine conscious, OTOH, seems nonsensical.
I think the mere existence of any given set of (silicon and metal) brain-body correlations can always in principle be ascribed to extremely clever programming of a hyper-sophisticated Turing machine to mimic human behavior. This human mimic machine would just be limited in its convincingness by the extent of its database of actual human behaviors.
It seems to me that the only conclusive way of testing for true consciousness would be to use ESP to actually establish a telepathic link with the AI system. For the human psychic to experience this, he/she would have to be communicating with some sort of other mind, however different it was and however different its embodiment in the physical.
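To make that concrete, here is a toy sketch of what I mean by a mimic machine (entirely my own illustration, with invented data - no real AI is built this literally): the machine's whole "mind" is a finite table of recorded human behaviors, consulted by approximate matching.

Code:
# Toy sketch of the "human mimic machine" idea (my own illustration,
# with invented data). Behaviour is nothing but retrieval from a finite
# database of recorded human responses, so convincingness is capped by
# the database's coverage.
from difflib import get_close_matches

# Hypothetical database of stimulus -> recorded human response pairs.
HUMAN_BEHAVIOR_DB = {
    "how are you feeling?": "A bit tired, honestly, but glad you asked.",
    "what is 2 + 2?": "Four, of course.",
    "do you fear death?": "Sometimes, late at night. Doesn't everyone?",
}

def mimic(stimulus: str) -> str:
    """Return the closest recorded human response, or fail visibly."""
    matches = get_close_matches(stimulus.lower(), HUMAN_BEHAVIOR_DB, n=1)
    if matches:
        return HUMAN_BEHAVIOR_DB[matches[0]]
    return "..."  # outside the database, the mimicry breaks down

print(mimic("How are you feeling?"))

However large the table grows, lookup is all there is - which is exactly the limitation I mean.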
(2023-05-29, 10:40 PM)nbtruthman Wrote: I think the mere existence of any given set of (silicon and metal) brain-body correlations can always in principle be ascribed to extremely clever programming of a hyper-sophisticated Turing machine to mimic human behavior. This human mimic machine would just be limited in its convincingness by the extent of its database of actual human behaviors.
It seems to me that the only conclusive way of testing for true consciousness would be to use ESP to actually establish a telepathic link with the AI system. For the human psychic to experience this, he/she would have to be communicating with some sort of other mind, however different it was and however different its embodiment in the physical.
What about octopuses, dolphins, elephants?
I get this is very much into sci-fi territory - heck, it recalls Star Trek: The Next Generation's episode "The Measure of a Man"...but I am not sure telepathy is a fair measure here. I don't think Deanna Troi could read Data with her empathic powers?
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'
- Bertrand Russell
(2023-05-29, 01:39 PM)Laird Wrote: How does one get access to ChatGPT 4.0? I've just similarly tested the version of ChatGPT at chat.openai.com, which it self-identifies as being based on the GPT-3.5 architecture. I tested it on a deceptively simple problem, like you did in the OP for version 4.0, and it didn't get it right after several attempts, although it wasn't totally out of the ballpark. I'd be interested in trying on version 4.0. Maybe you can feed the problem description in and see how you go? Here's the relevant part of the transcript:
@Laird I never got back to you on this “challenge” - the answer is that until today every incremental improvement of ChatGPT failed it.
However, the model they released yesterday, o1, got it right on the initial prompt.
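If anyone wants to rerun this kind of test themselves, something like the sketch below should do it. To be clear about my assumptions: it presumes the official openai Python client, an API key in the OPENAI_API_KEY environment variable, and that the new model is exposed under the name "o1-preview" - check the current model list before relying on that.

Code:
# Minimal sketch for re-running a challenge prompt against the new model.
# Assumptions: the official `openai` Python client is installed, an API
# key is set in the OPENAI_API_KEY environment variable, and the model
# name "o1-preview" is current (verify against the published model list).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBLEM = "..."  # paste the original problem statement here

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": PROBLEM}],
)

print(response.choices[0].message.content)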
(2024-09-13, 08:27 PM)sbu Wrote: @Laird I never got back to you on this “challenge” - the answer is that until today every incremental improvement of ChatGPT failed it.
However, the model they released yesterday, o1, got it right on the initial prompt.
That's very impressive. It took me and the others in the thread in which the problem occurred several attempts to get it right. Unless ChatGPT o1 had been trained on that thread itself or otherwise seen the problem before, that mathematical problem-solving ability seems to me to demonstrate a meaningful level of (artificial) intelligence. Thanks for getting back to me.
(2024-09-14, 01:49 AM)Laird Wrote: That's very impressive. It took me and the others in the thread in which the problem occurred several attempts to get it right. Unless ChatGPT o1 had been trained on that thread itself or otherwise seen the problem before, that mathematical problem-solving ability seems to me to demonstrate a meaningful level of (artificial) intelligence. Thanks for getting back to me.
Early testing indicates that the new model is a significant improvement. I also tested it with a few differential equations that its predecessor couldn't solve, and it successfully solved them. It will be interesting to follow how this evolves in the next few years.
I came across a few YouTube videos on ChatGPT o1 the other day, and it looks very impressive. It seems to be able to simulate the sort of step-by-step thinking that humans use when solving problems, to the point of being able to describe in real time the steps it's following. I don't know anything about the underlying technology, but if it's still largely neural network-based, then it looks like the debate about whether neural networks are sufficient to reach artificial general intelligence or whether additional ad hoc modules are required is weighing heavily in favour of "Yes, they suffice".
(2024-09-28, 09:43 AM)Laird Wrote: I came across a few YouTube videos on ChatGPT o1 the other day, and it looks very impressive. It seems to be able to simulate the sort of step-by-step thinking that humans use when solving problems, to the point of being able to describe in real time the steps it's following. I don't know anything about the underlying technology, but if it's still largely neural network-based, then it looks like the debate about whether neural networks are sufficient to reach artificial general intelligence or whether additional ad hoc modules are required is weighing heavily in favour of "Yes, they suffice".
While I believe AI within a decade will surpass humans in every specialized cognitive task - such as math, coding, or driving - a truly self-learning AI remains out of reach for now. Ultimately, I don't think consciousness can be reduced to mere computations. It seems to be a fundamental phenomenon intrinsically tied to life. So eventually AIs will reach their limits, but obviously we are just at the beginning of this journey.
(2024-09-28, 09:43 AM)Laird Wrote: I came across a few YouTube videos on ChatGPT o1 the other day, and it looks very impressive. It seems to be able to simulate the sort of step-by-step thinking that humans use when solving problems, to the point of being able to describe in real time the steps it's following. I don't know anything about the underlying technology, but if it's still largely neural network-based, then it looks like the debate about whether neural networks are sufficient to reach artificial general intelligence or whether additional ad hoc modules are required is weighing heavily in favour of "Yes, they suffice".
To me this is a large subject, and whether "artificial general intelligence" has any relevance to the question of whether such a system is also conscious is an interesting issue requiring a lengthy response. Whether such an intelligence, if it really has come into being, is actually conscious is one of the most important questions in this area.
From my standpoint, any non-quantum computer-based AI (even one that achieves the status of "artificial general intelligence" by mimicking human thought processes and expressions so accurately as to be indistinguishable from them) is separated by an existential gulf from ever achieving consciousness.
I wrote a little about the details of why this is the case in another post in this thread, and to avoid recomposing my argument it seems relevant to quote it here with some additions, as follows (the original is at https://psiencequest.net/forums/thread-a...8#pid58848 ):
Quote:Binary (or other number-system) computation is the basic underlying mechanism driving all non-quantum computers. This involves large-scale digital processing chips, some containing entire processors, that do absolutely nothing but execute the current machine instruction and then jump to the next. Each execution amounts to an elemental arithmetic operation like add, subtract, or multiply, or an "if-then do this, otherwise do that" Boolean logic step.
This overall logic is basically of the sort where a logic or arithmetic computation is carried out and the result determines what path the computer will then take in executing memory instructions. For instance, for an elementary arithmetic algorithm: "if the A register value is > value Y in variable memory address Z, then jump to memory address W and execute that instruction; if it is < or =, execute the instruction at address V". Or the computation carried out may be, rather than arithmetic, a Boolean logic tree composed of elements like "if X is 1 (true) and Y is 0 (false), then go to location U and execute that next elementary decision logic kernel". (A toy code sketch of this execute-and-jump loop follows at the end of this post.)
No matter how many processors are working in concert on the problem, and however great the sophistication of the programming (with a generative AI system, for instance), at any moment of the AI system's operation this just-described micro-code logic and arithmetic process is what is really going on in the foundational core of the computers executing the programs.
This is a physical logical process mechanized by microtransistors and diodes and involving algorithms, and it is in an entirely different existential realm from consciousness, which is immaterial. Therefore AIs simply can't develop consciousness. Therefore even if some highly advanced generative AI system finally exhibits artificial general intelligence, we will know it is mimicking human thought processes, but it is absolutely not conscious.
And it seems to me that unfortunately there is no valid test of this, because there would seem to be no practical limits to how accurately such an AI system could mimic a human being in its communications.
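To make the execute-and-jump picture in the quote concrete, here is a toy interpreter (purely my own illustration - the instruction names and layout are invented, not any real chip's instruction set):

Code:
# Toy fetch-decode-execute loop (an invented mini instruction set, for
# illustration only). Every step is exactly of the kind described above:
# an arithmetic or Boolean test whose result decides which memory
# address the machine jumps to next.
program = {
    0: ("LOAD_A", 5),         # A register := 5
    1: ("SUB_A", 1),          # A := A - 1
    2: ("JUMP_IF_GT", 0, 1),  # if A > 0, jump to address 1
    3: ("HALT",),
}

a_register = 0
pc = 0  # program counter: address of the current instruction

while True:
    instr = program[pc]
    op = instr[0]
    if op == "LOAD_A":
        a_register = instr[1]
        pc += 1
    elif op == "SUB_A":
        a_register -= instr[1]
        pc += 1
    elif op == "JUMP_IF_GT":
        # the Boolean test that determines the machine's next path
        pc = instr[2] if a_register > instr[1] else pc + 1
    elif op == "HALT":
        break

print("final A register value:", a_register)  # prints 0

However elaborate the program running on top, each underlying step is only this sort of test-and-jump - which is the existential gulf I mean.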
I agree with you both that AGI doesn't entail consciousness given its basis in digital computation - it simulates conscious thinking processes rather than consciously thinking (hence the "artificial").
I'm curious though to follow up on this by sbu:
(2024-09-28, 05:34 PM)sbu Wrote: a truly self-learning AI remains out of reach for now.
What do you mean by "truly self-learning" and why do you think it remains out of reach for now? When and why will that change?