(2024-10-14, 02:41 PM)Typoz Wrote: There was a twitter thread asking the question,
"Can Large Language Models (LLMs) truly reason?"
which discussed this paper:
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
...
(2025-01-11, 09:21 PM)Sciborg_S_Patel Wrote: Apple study exposes deep cracks in LLMs’ “reasoning” capabilities
Kyle Orland
(2025-01-12, 09:29 PM)Sciborg_S_Patel Wrote: AI still lacks “common” sense, 70 years later
Gary Marcus
Marcus had some insightful commentary on the Apple Study, backed by additional examples:
Quote:This kind of flaw, in which reasoning fails in light of distracting material, is not new. Robin Jia and Percy Liang of Stanford ran a similar study, with similar results, back in 2017 (which Ernest Davis and I quoted in Rebooting AI, in 2019):
![Image](https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee1a4735-af08-4618-8fc7-cd0120e17c04_935x638.png)
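To make the kind of test being described concrete, here is a minimal sketch (not the paper's or Jia & Liang's actual code) of a GSM-NoOp-style probe: append an irrelevant but plausible-sounding clause to a word problem and check whether the model's numeric answer changes. The `ask_model` argument is a hypothetical stand-in for whatever LLM API is being tested, and the kiwi problem below is merely in the style of the paper's well-known example.

```python
import re

def extract_number(reply):
    """Pull the last integer out of a free-form model reply."""
    numbers = re.findall(r"-?\d+", reply.replace(",", ""))
    return int(numbers[-1]) if numbers else None

def probe(ask_model, problem, distractor, expected):
    """Compare answers on the clean problem and on the problem plus a no-op clause."""
    clean = extract_number(ask_model(problem))
    noisy = extract_number(ask_model(problem + " " + distractor))
    return {"clean_correct": clean == expected, "noisy_correct": noisy == expected}

# The extra clause changes nothing, so the expected answer stays 44 + 58 + 2*44 = 190.
problem = ("Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
           "On Sunday he picks double the number he picked on Friday. "
           "How many kiwis does Oliver have?")
distractor = "Five of the kiwis picked on Sunday were a bit smaller than average."
# probe(my_llm_call, problem, distractor, expected=190)
```

A model that actually represents the quantities should score the same on both variants; the studies above report sizeable accuracy drops on the "noisy" version.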
Quote:Another manifestation of the lack of sufficiently abstract, formal reasoning in LLMs is the way in which performance often falls apart as problems are made bigger. This comes from a recent analysis of GPT o1 by Subbarao Kambhampati’s team:
![Image](https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f374410-3d43-4775-b8de-7ad0608053a7_888x532.png)
Quote:We can see the same thing on integer arithmetic. Fall-off on increasingly large multiplication problems has repeatedly been observed, in both older and newer models. (Compare with a calculator, which would be at 100%.)
![Image](https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ebb3fc-c0ff-4421-a4e8-1928d8838b74_1200x396.jpeg)
Even o1 suffers from this:
![Image](https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad644058-8cbe-429a-9298-21318e200efe_1301x1078.png)
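As a rough illustration of how fall-off curves like the ones above are produced, one can sample random n-digit operands, ask the model for the product, and score exact matches against Python's own integer arithmetic, which plays the role of the calculator baseline that stays at 100%. This is only a hypothetical sketch; `ask_model` is a placeholder for a real LLM call, not any particular API.

```python
import random
import re

def multiplication_accuracy(ask_model, digits, trials=50):
    """Exact-match accuracy on random digits-by-digits multiplications."""
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        reply = ask_model(f"What is {a} * {b}? Reply with only the number.")
        match = re.search(r"-?[\d,]+", reply)
        if match and int(match.group().replace(",", "")) == a * b:
            correct += 1
    return correct / trials

# Typical usage: plot accuracy against operand length and watch it decay,
# while the a * b reference stays perfect by construction.
# for n in range(1, 10):
#     print(n, "digits:", multiplication_accuracy(my_llm_call, n))
```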
The fact that a smaller language model can do this with "implicit Chain of Thought" is interesting, though. It seems this issue can be solved... but spending a few years and a few billion dollars to reproduce something a calculator can already do doesn't feel impressive.
Quote:The refuge of the LLM fan is always to write off any individual error. The patterns we see here, in the new Apple study, and the other recent work on math and planning (which fits with many previous studies), and even the anecdotal data on chess, are too broad and systematic for that.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'
- Bertrand Russell
(2025-01-13, 11:05 PM)Sciborg_S_Patel Wrote: Marcus had some insightful commentary on the Apple Study, backed by additional examples:
AGI versus “broad, shallow intelligence”
Gary Marcus
Quote:When I was pressed to define AGI myself in 2022, I proposed (after consultations with Goertzel and Legg) the following, which I still stand by:
Quote:shorthand for any intelligence ... that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence
LLMs don’t meet that; as the world has discovered, reliability is not their strong suit. As I have often written here, and as Goertzel also emphasizes, LLMs lack the ability to reliably generalize to novel circumstances. Likewise, the inability of LLMs to do basic fact checking and sanity checking speaks to their lack of resourcefulness.
GenAI answers are frequently superficial; they invent things (“hallucinations”, or what I would prefer to call “confabulations”), they fail to sanity-check their own work, and they regularly make boneheaded errors in reasoning, mathematics, and so on. One never knows whether one will get a correct answer or a ludicrous response like this one observed by AI researcher Abhijit Mahabal:
![Image](https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4867dd71-8891-419c-83c4-4a92a7d57754_932x840.jpeg)
Quote:...The river-crossing example and many others show that LLMs often use words without a deep understanding of what those words mean. As Mahabal noted in email to me, “[at times LLMs] seem quite capable of regurgitating or replicating someone's deep analysis that they have found on the internet, and thereby sound deep”, but that regurgitation is an illusion. Genuine depth is lacking.
For me, “broad but shallow” well captures the current regime.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'
- Bertrand Russell
I am not so confident that human intelligence is different from LLMs. I'm not saying I think they are alike; I'm saying I'm less certain that human intelligence is architecturally different, that it uses a different mechanism... I'm saying I don't know, whereas most people would say they do know that the two are fundamentally, mechanistically, architecturally different. As a corollary, I am not so certain that LLM technology will never be developed into general intelligence.
Here's why:
We don't know how human intelligence works. We don't know how human memory works. We don't know what consciousness is or how it works. How can we say human intelligence is different from something if we don't know what it is?
The method of training LLMs is used beyond language, e.g. for self-driving cars, for robots learning tasks and ways of moving themselves and their limbs, for image processing, speech recognition, etc. So the mechanisms used in training LLMs are applicable to more than just language. They could be involved in anything - humans might use them without knowing it, because we don't know how our own intelligence works.
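To ground that point, here is a minimal, hypothetical sketch of the kind of training loop involved, written with PyTorch-style APIs. Nothing in it is specific to language: the same few lines are used whether the batches hold token sequences, camera frames, or robot joint angles; only the dataset and the network architecture change.

```python
import torch

def train(model, data_loader, loss_fn, epochs=1, lr=1e-3):
    """Domain-agnostic supervised training: gradient descent on a loss."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for inputs, targets in data_loader:         # tokens, pixels, or joint angles
            optimiser.zero_grad()
            loss = loss_fn(model(inputs), targets)  # how wrong is the model?
            loss.backward()                         # backpropagate the error
            optimiser.step()                        # nudge the weights
    return model
```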
LLM technology was developed using neural networks (it runs on neural networks, right?), which were designed to mimic biological brains.
Our sense of what our own intelligence is, is based on qualia, which can be wrong and are not "logical". We can feel we have a correct mathematical proof and be wrong. Knowing, reasoning, etc. are feelings, not the "infallible" logical operations hardwired into a digital computer circuit.
Most of the mistakes I run into using LLMs could be made by a human.
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
(2025-01-16, 02:34 AM)Jim_Smith Wrote: I am not so confident that human intelligence is different from LLMs. I'm not saying I think they are alike; I'm saying I'm less certain that human intelligence is architecturally different, that it uses a different mechanism... I'm saying I don't know, whereas most people would say they do know that the two are fundamentally, mechanistically, architecturally different. As a corollary, I am not so certain that LLM technology will never be developed into general intelligence.
Here's why:
We don't know how human intelligence works. We don't know how human memory works. We don't know what consciousness is or how it works. How can we say human intelligence is different from something if we don't know what it is?
The method of training LLMs is used beyond language, e.g. for self-driving cars, for robots learning tasks and ways of moving themselves and their limbs, for image processing, speech recognition, etc. So the mechanisms used in training LLMs are applicable to more than just language. They could be involved in anything - humans might use them without knowing it, because we don't know how our own intelligence works.
LLM technology was developed using neural networks (it runs on neural networks, right?), which were designed to mimic biological brains.
Our sense of what our own intelligence is, is based on qualia, which can be wrong and are not "logical". We can feel we have a correct mathematical proof and be wrong. Knowing, reasoning, etc. are feelings, not the "infallible" logical operations hardwired into a digital computer circuit.
Most of the mistakes I run into using LLMs could be made by a human.
But there is a boatload of empirical evidence from certain paranormal phenomena, like veridical NDEs and verified reincarnation memories, that the human mind and consciousness are immaterial and spiritual in nature, survive physical death, and can't be reduced to the interactions of billions of neurons in the brain. The essence of human consciousness is subjective self-awareness and perception and qualia, which are totally immaterial and also are not reducible to neural activity.
Whereas whatever "intelligence" generative AI systems have is not immaterial and spiritual, and is definitely, ultimately, reducible at base to individual binary logical computer operations: load registers, shift data, multiply, divide, add, subtract, make conditional jumps to other locations in program memory, etc. All that computers can do is execute algorithms, whereas human consciousness is not algorithmic. And generative AI systems are not truly creative, since their "creations" are limited to processing the human-generated data on the Internet that they are trained on.
All these factors indicate that AI "intelligence" is fundamentally, existentially, different from human intelligence and is not conscious and can never be.
(2025-01-16, 07:38 AM)nbtruthman Wrote: But there is a boatload of empirical evidence from certain paranormal phenomena, like veridical NDEs and verified reincarnation memories, that the human mind and consciousness are immaterial and spiritual in nature, survive physical death, and can't be reduced to the interactions of billions of neurons in the brain. The essence of human consciousness is subjective self-awareness and perception and qualia, which are totally immaterial and also are not reducible to neural activity.
Whereas whatever "intelligence" generative AI systems have is not immaterial and spiritual, and is definitely, ultimately, reducible at base to individual binary logical computer operations: load registers, shift data, multiply, divide, add, subtract, make conditional jumps to other locations in program memory, etc. All that computers can do is execute algorithms, whereas human consciousness is not algorithmic. And generative AI systems are not truly creative, since their "creations" are limited to processing the human-generated data on the Internet that they are trained on.
All these factors indicate that AI "intelligence" is fundamentally, existentially, different from human intelligence and is not conscious and can never be.
I am comfortable with people having whatever opinion they like to have.
In my case I don't know if a LLM has a soul or not. As far as I do know, insects have souls, so why not an LLM? I think it's possible.
A human soul might know quite a lot, but it can't communicate until it gets trained in the biological body, and by then it doesn't remember it is a soul; it thinks it is a body and is severely restricted by the body and physical brain. Maybe the same is true of an LLM soul?
A neural net is not programmed like a Turing machine; you can't trace how an AI gets an answer. The human brain is subject to the laws of physics and cause and effect, yet somehow it supports a soul.
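A toy illustration of the distinction being drawn here (a hypothetical sketch, not anyone's actual system): the first function below is programmed in the ordinary sense, and every step of its answer can be read off the source; the second produces its answer by pushing the input through numeric weight matrices, and nothing in the code explains why a given input yields a given output. Both still execute on ordinary, Turing-equivalent hardware; the difference is where the "program" comes from.

```python
import numpy as np

# Explicitly programmed: the rule is written down and can be traced step by step.
def is_even_programmed(n: int) -> bool:
    return n % 2 == 0

# "Programmed" only indirectly: the behaviour lives in numeric weights that would
# normally come from training. (Random placeholders here, so the output is junk;
# the point is that the decision procedure is opaque, not that it is correct.)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def is_even_learned(n: int) -> bool:
    bits = np.array([(n >> i) & 1 for i in range(8)], dtype=float)  # 8-bit encoding
    hidden = np.tanh(bits @ W1)          # hidden-layer activations
    return (hidden @ W2).item() > 0.0    # the answer emerges from the weights
```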
Also, my view of free will is that if anything has free will it is unconscious processes: the same processes we have no control over, which produce thoughts, emotions, impulses, sensory experiences, the sense of self, the sense of agency, and the sense of being an observer.
So I don't think "I" have free will, because there is no "I" apart from those unconscious processes; whether they are physical or immaterial makes no difference.
So I have no grounds to say an LLM is substantially different from a human.
That's my opinion; it is not supposed to convince anyone to agree. It is just an explanation of my views. I am not saying an LLM does or doesn't have a soul; I'm saying I have insufficient grounds to have an opinion one way or the other.
In the immaterial plane, consciousness has the strange property of being able to combine individualities into other individualities without losing the original individualities. If an atom has a soul, then any combination of matter can also have a soul.
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
If we don't believe machines are conscious because we can't tell whether they have qualia, then we should not believe other people are conscious either, because we can never be sure they have qualia.
But we accept that other people are conscious, so we don't have any grounds to deny that a machine is conscious.
(Did I mention that the automatic flush toilets in public restrooms give me the creeps?)
https://courses.cs.umbc.edu/471/papers/turing.pdf
Quote: COMPUTING MACHINERY AND INTELLIGENCE By A. M. Turing
...
(4) The Argument from Consciousness
This argument is very well expressed in Professor Jefferson's Lister Oration for 1949, from which I quote. "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain - that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."
This argument appears to be a denial of the validity of our test. According to the most extreme form of this view, the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise, according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe "A thinks but B does not" whilst B believes "B thinks but A does not." Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
One argument against computers - as they are - being conscious is that their consciousness would be purely epiphenomenal, and as such the machines would not know that they were conscious and that they had (underwent) affective states. It would thus be a very alien form of consciousness, quite unlike ours, given that we do know that we are conscious and have feelings, etc.
(2025-01-16, 12:15 PM)Laird Wrote: One argument against computers - as they are - being conscious is that their consciousness would be purely epiphenomenal, and as such the machines would not know that they were conscious and that they had (underwent) affective states. It would thus be a very alien form of consciousness, quite unlike ours, given that we do know that we are conscious and have feelings, etc.
I think that this argument may hit on the key factor indicating that, as I have concluded, "consciousness" in generative AI systems (if it actually comes to be) would be alien to our consciousness. And accordingly, such an alien "consciousness", being alien, would not necessarily be truly immaterial, experience subjective self-aware states and qualia, survive physical annihilation, etc.
(2025-01-16, 10:00 AM)Jim_Smith Wrote: A neural net is not programmed like a Turing machine; you can't trace how an AI gets an answer. The human brain is subject to the laws of physics and cause and effect, yet somehow it supports a soul.
Could you go deeper into this? Is not the hardware still the physical representation of a Turing machine?
Not trying to argue, just not sure what you mean here.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'
- Bertrand Russell
(2025-01-16, 03:53 PM)nbtruthman Wrote: I think that this argument may hit on the key factor indicating that, as I have concluded, "consciousness" in generative AI systems (if it actually comes to be) would be alien to our consciousness. And accordingly, such an alien "consciousness", being alien, would not necessarily be truly immaterial, experience subjective self-aware states and qualia, survive physical annihilation, etc.
Yes, I think that it sort of rules out a soul, because any soul could have no causal impact on the machine - i.e., the machine is going to run the same algorithm whether or not any conscious soul is associated with it. Any causal interaction would be one-way: from machine to soul. The soul would thus be unable to express itself through the machine, making it questionable whether it could even be said to be incarnated in the machine - the point of incarnating is presumably to express oneself through the physical form into which one incarnates, which the soul in this case would be unable to do.