AI megathread

301 Replies, 12200 Views

(2025-01-09, 12:31 PM)Sciborg_S_Patel Wrote: See neuroscientist Erik Hoel's essay Neuroscience is pre-paradigmatic. Consciousness is Why.

AI progress has plateaued below GPT-5 level, Or, “Search isn’t the same thing as intelligence”

Erik Hoel

Quote:I know that there is a huge amount of confusion, fear, and hope that creates a fog of war around this technology. It's a fog I've struggled to pierce myself. But I do think signs are increasingly pointing to the saturation of AI intelligence at below domain-expert human level. It’s false to say this is a failure, as some critics want to: if AI paused tomorrow, people would be figuring out applications for decades.

Quote:And we now know that models are often over-fitted to benchmarks. As Dwarkesh Patel wrote while debating (with himself) if scaling up models will continue to work:
Quote:But have you actually tried looking at a random sample of MMLU and BigBench questions? They are almost all just Google Search first hit results. They are good tests of memorization, not of intelligence.
One issue with discerning the truth here is that researchers who release the models aren’t unbiased scientists, so they don’t test on every single benchmark ever. They are more likely to choose benchmarks that show improvement in the announcement. But I’ve noticed online that private benchmarks often diverge from public ones.

Quote:But I think people focusing on price or the domain-specificity of improvements are missing the even bigger picture about this new supposed scaling law. For what I’m noticing is that the field of AI research appears to be reverting to what the mostly-stuck AI of the 70s, 80s, and 90s relied on: search.
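
To make the "search" point concrete, here is a minimal sketch (in Python, with hypothetical generate() and score() stand-ins rather than any real model API) of best-of-N sampling, one of the simplest forms of test-time search: the model is no smarter on any single attempt; the system just draws many candidates and keeps whichever one an external scorer prefers.

Code:
import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for one sampled completion from a language model.
    return random.choice([f"draft answer {i} to: {prompt}" for i in range(100)])

def score(prompt: str, candidate: str) -> float:
    # Hypothetical verifier/reward model; in practice this might be a unit test,
    # a proof checker, or another learned model.
    return random.random()

def best_of_n(prompt: str, n: int = 64) -> str:
    """Sample n candidates and return the highest-scoring one.
    Any extra capability comes from searching over samples, not from the model."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

print(best_of_n("What is 17 * 23?"))
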
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
(2025-01-09, 01:50 PM)sbu Wrote: If this is so, why do humans not form any episodic memories during the first 2-3 years of life, when those networks are developing the most?

Because this has nothing to do with memory loss ~ it has, perhaps, to do with limitations of the physical form that limit the expression of mind.

No metaphysic has a good answer for this, unfortunately. We need more than metaphysics here.
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


[-] The following 1 user Likes Valmar's post:
  • Sciborg_S_Patel
(2025-01-09, 03:49 PM)sbu Wrote: The evidence for terminal lucidity is extremely poor. It's like Sam Parnia, who after 25 years of research doesn't really have any NDEs worth mentioning. It could well be that the few cases are due to these patients having less extensive brain damage than the majority.

Your logic makes precious little sense. Terminal lucidity is one of those hard-to-catch phenomena because it is entirely unpredictable. There are zero physical markers that will determine who will and will not have it ~ it simply appears, unpredictably, in those close to death, and there's no marker for that either.

Terminal lucidity is not affected by the amount of brain damage ~ it appears to be an entirely irrelevant factor, so speculation is meaningless.
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


[-] The following 1 user Likes Valmar's post:
  • Sciborg_S_Patel
(2025-01-09, 04:14 PM)sbu Wrote: Your continued use of the straw man argument does not impress me. (I'm at no point talking about code - that's what makes your arguments a straw man.) Large Language Models (LLMs) and other AIs have shown that the presence of clusters in extremely high-dimensional functions can be used to simulate cognitive behavior.

But we are talking about code ~ an LLM is entirely defined by an algorithm, after all. LLMs do not at all "simulate" cognitive behaviour ~ there is nothing algorithmic about cognitive behaviour, and besides, it is entirely unrepresentable by reduction to bits and bytes.

(2025-01-09, 04:14 PM)sbu Wrote: Coupled with ordinary human observations—how infants develop, acquire language, and how cognitive growth tapers off after early adolescence, alongside the declines associated with aging and brain diseases—this suggests that who we are is largely shaped by the sensory input we receive in those formative years. All that sensory input is data. I don't think we would be anything without that data.

This isn't true at all ~ you seem not to understand that adults can go through massive changes in personality over the years that have nothing to do with aging, and everything to do with significant experiences.
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


[-] The following 1 user Likes Valmar's post:
  • Sciborg_S_Patel
(2025-01-09, 05:00 PM)sbu Wrote: It's you who seems very interested in code, not me. Software is so boring.

I suppose I must be "boring" too, with how you seem to be deliberately ignoring my replies and their contents. Interesting.

(2025-01-09, 05:00 PM)sbu Wrote: Yes, I do think it's metaphysically interesting to explore the structure that’s preserved in data. For many years, researchers tried (in vain) to model natural languages with formal logic (like your little DSL example). But now it's been proven that a fluid conversation can be simulated using statistics alone. No magic sauce required.

LLM algorithms are the "magic sauce" ~ they imitate "fluid conversation" by predicting what the output should be given the input. LLM algorithms are trained on absolutely massive amounts of human-generated data ~ much of it from social media. That's right ~ our social media conversations make up a large portion of the inputs.

(2025-01-09, 05:00 PM)sbu Wrote: Certain human traits require the right data at the right time, or certain skills, like language, will never develop. The similarities can't be ignored.

LLMs do nothing on their own, as LLMs are just algorithms that need inputs given by humans, whether manually or by a program ~ written by humans.

So there are zero similarities.
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


[-] The following 1 user Likes Valmar's post:
  • Sciborg_S_Patel
(2025-01-09, 10:34 PM)Valmar Wrote: LLMs do nothing on their own, as LLMs are just algorithms that need inputs given by humans, whether manually or by a program ~ written by humans.

So there are zero similarities.

Yeah, I really don't get the "no code, just statistics" argument.

If it's just statistics I hope someone can at least record themselves using the right statistical tools to show how an input query produces a particular output.

I'll settle for a toy example; I just want a full "execution trace" of a human being doing this that shows what is so metaphysically significant about LLMs.
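
(For reference, a toy version of such a trace can be written down with nothing but counting: fit a bigram frequency table on a few sentences, then generate by repeatedly looking up the most common next word. The corpus and numbers below are made up; this is only the "just statistics" claim at its most naive, not a claim about how any real LLM works.)

Code:
from collections import Counter, defaultdict

# Tiny made-up "training corpus" (real LLMs are trained on vastly more text).
corpus = "the cat sat on the mat . the dog sat on the rug . the cat saw the dog ."

# Count how often each word follows each word (a bigram table).
counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def trace_generate(start: str, steps: int = 6) -> str:
    """Greedy generation with a printed trace: at each step, show the
    frequency table consulted and the word picked from it."""
    out = [start]
    for _ in range(steps):
        table = counts[out[-1]]
        if not table:
            break
        choice, freq = table.most_common(1)[0]
        print(f"after {out[-1]!r}: counts={dict(table)} -> pick {choice!r} ({freq}x)")
        out.append(choice)
    return " ".join(out)

print(trace_generate("the"))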

To be clear, LLMs are surprisingly good at faking thought, AND I think synthetic replicas of the as-yet-unknown minimally necessary [structural] aspect of our own bodies (with focus on the brain) would instantiate* a conscious entity - which I believe will include synthetic replicas of microtubules. Those will be "androids" that are conscious until they age & die [on a presumably longer timescale than human flesh, as such]; they won't be programs running on a Turing Machine that is only conscious when running particular programs.

I just don't see what LLMs do that should make me question my metaphysical positions.

*"Instantiate" used here as a metaphysically neutral term. Maybe a soul enters the android, maybe Mind@Large spins off an alter, maybe Information is integrated in the just the right way...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-01-10, 04:44 AM by Sciborg_S_Patel. Edited 6 times in total.)
[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
Differences between LLMs and humans

Cholling

Quote:I’ve joked that LLMs should be referred to as “Derridean AI,” after Jacques Derrida’s line from Of Grammatology that “there is nothing outside the text.” In the case of LLMs, this is literally true: their training set consists only of text (or rather, text that has been chopped up and converted into non-linguistic tokens that can then be assigned numeric values). LLMs don’t have eyes or ears or any kind of sensory input. They live in a world consisting only of letters and symbols. And they are trained only to predict what other symbols are likely to follow their inputs.

This, to me, is one of the strongest reasons to doubt the claims that LLMs are conscious, or sentient, or do anything at all reminiscent of human thought. When a human learns a word, they connect it with their lived experience. You don’t just learn the word “dog” by studying all the other words that tend to appear alongside it. You can also see, hear, touch, and smell actual dogs, and associate those experiences with the word. What’s more, as a human living in the world, you have needs, wants, and interests. You need to impart information to others, make requests from them, and so forth, and that informs what utterances you make, even which words you learn. This need to communicate with others is the very reason why you make utterances in the first place! LLMs don’t talk because they have something to say. They talk because that’s what they’re built to do when someone feeds them a prompt. They’ll only appear to impart information if there is data in their training set that allows them to simulate an intelligent conversation. If you told an LLM you were drowning, they would not lend a hand, but they might say “I’ll fetch a life preserver!” if they’ve been fed a similar exchange. That won’t mean a damn thing, however.
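
(As a rough illustration of the "chopped up and converted into non-linguistic tokens" point in the quoted piece: the model never encounters dogs, only integer ids standing in for pieces of text. The vocabulary below is a made-up toy; real tokenizers such as byte-pair encoders split text into subword pieces rather than whole words.)

Code:
# Toy whitespace "tokenizer" with a hypothetical fixed vocabulary;
# real LLM tokenizers (e.g. byte-pair encoding) work on subword pieces.
vocab = {"<unk>": 0, "the": 1, "dog": 2, "barked": 3, "at": 4, "mailman": 5, ".": 6}

def encode(text: str) -> list[int]:
    """Map each word to its integer id; unknown words become <unk>."""
    return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]

print(encode("The dog barked at the mailman ."))
# -> [1, 2, 3, 4, 1, 5, 6]  (the model only ever operates on these numbers)
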
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 2 users Like Sciborg_S_Patel's post:
  • Typoz, Valmar
(2025-01-09, 11:37 PM)Sciborg_S_Patel Wrote: Yeah, I really don't get the "no code, just statistics" argument.

If it's just statistics I hope someone can at least record themselves using the right statistical tools to show how an input query produces a particular output.

I'll settle for a toy example; I just want a full "execution trace" of a human being doing this that shows what is so metaphysically significant about LLMs.

I suspect you'll be waiting for a very, very, very long time. There's plenty of marketing hype, though!

(2025-01-09, 11:37 PM)Sciborg_S_Patel Wrote: To be clear, LLMs are surprisingly good at faking thought, AND I think synthetic replicas of the as-yet-unknown minimally necessary [structural] aspect of our own bodies (with focus on the brain) would instantiate* a conscious entity - which I believe will include synthetic replicas of microtubules. Those will be "androids" that are conscious until they age & die [on a presumably longer timescale than human flesh, as such]; they won't be programs running on a Turing Machine that is only conscious when running particular programs.

I'm actually not sure this is possible... I don't think that there is a "minimally necessary structural aspect" of the body that "instantiates" a conscious entity. After all, how do conscious entities enter the world? By entering a fetus that is at the right stage of growth ~ a very specific stage. What was it, 49 days? Even the Tibetan Buddhists knew this, long before medical science ever confirmed it. So how they attained that knowledge is impressive.

But... we cannot think of just humans. We must consider what every single biological lifeform has in common ~ what is it that allows a conscious entity, a mind, to incarnate? What conditions?

I think that synthetic replicas of microtubules are simply the wrong answer to a question that doesn't really exist. That is, consciousness isn't the result of microtubules ~ they're just involved in helping consciousness control its physical avatar.

Not only that, but it makes a great many presumptions about the nature of consciousness ~ denying consciousness a priori to anything that doesn't have microtubules. Yet I would consider trees very much conscious, despite being a radically different lifeform. Trees are like... mammals, as grasses are like the more simple-minded insects that don't have a form that requires much expression. They're almost just there to act as... food for something else. Yes, conscious, but with very short lifespans.

(2025-01-09, 11:37 PM)Sciborg_S_Patel Wrote: I just don't see what LLMs do that should make me question my metaphysical positions.

According to LLM hypesters, it's the next big revolution! LLMs will solve all of our worldly problems, according to the LLM priesthood.

(2025-01-09, 11:37 PM)Sciborg_S_Patel Wrote: *"Instantiate" used here as a metaphysically neutral term. Maybe a soul enters the android, maybe Mind@Large spins off an alter, maybe Information is integrated in just the right way...

Thing is, we don't even know how minds attach to a body. Yes, there are astral layers... but what is the blueprint, the body plan for that? Androids made by us would not have even an astral blueprint or form or anything ~ something that modern science has not even a vague comprehension of.

We humans cannot logically make it happen. The souls that create new forms would deliberately need to choose to make it a reality astrally for there to even be the beginning of a possibility.

All in all to say that I think we have the process backwards ~ first you need the astral blueprint and body plan, with all the metaphorical bells and whistles an incarnate would require to fully operate the form, and then the incarnate-to-be... enters that form, or resonates with it, or something.

Whereas modern science seems to think it's just about mimicking certain human brain structures... a purely Physicalist / Materialist viewpoint. Even Panpsychists make this mistake, because they don't actually know what consciousness is, like everyone else.

As for myself... I just have some guesses based on my spiritual experiences. I won't claim to know what is actually happening.
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


[-] The following 1 user Likes Valmar's post:
  • Sciborg_S_Patel
(2025-01-10, 06:23 AM)Valmar Wrote: But... we cannot think of just humans. We must consider what every single biological lifeform has in common ~ what is it that allows a conscious entity, a mind, to incarnate? What conditions?

I think that synthetic replicas of microtubules are simply the wrong answer to a question that doesn't really exist. That is, consciousness isn't the result of microtubules ~ they're just involved in helping consciousness control its physical avatar.

Will spin this wider consideration of spirits into a new discussion, but just to note: my point is not that microtubules or any other structure produces consciousness, just that clearly some structure - along with its integrity - seems necessary for embodied consciousness. I'm basing this on the correlations that exist between my status as an Experiencer in this localized period of space & time and what neuroscience has recorded about brain changes bringing changes to the flow of experiences.

As Hoffman would say, the brain is an icon but it’s one you have to take seriously because damaging [it] can result in the end of this embodied experience.

It is possible that we find whatever the minimal necessary structures are in our brains that allow for our localized experience, and then make a synthetic version, and then…nothing happens. But then we would consider that evidence of a soul, or perhaps there is just something special about organic life.

OTOH it might come to pass that these synthetic conscious entities are incredibly gifted at Psi, something Radin mused about when he considered there are biological structures that seem to narrow our conscious self.

So yeah, we might never be able to make synthetic consciousness, but just looking at the relationship between structure and experience I do feel it’s at least a rational possibility.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-01-10, 06:42 AM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
(2025-01-08, 07:35 AM)Valmar Wrote: Rereading my own reply... nowhere did I mention a brute-force algorithm.

An algorithm alone simply isn't sufficient for brute-forcing ~ for that you need both immense processing power and an immense dataset for the algorithm to chew through.

So, no, I wasn't wrong. Please re-read my replies more closely.

Rather than acknowledge that you misunderstood a comp-sci term, and accept that you were wrong, you've chosen to duck, dodge, weave, and double down. Poor form, dude.
