AI megathread


(2024-12-30, 08:30 PM)Sciborg_S_Patel Wrote: I'm not sure what you mean by saying the AI is algorithmic but not rule-based?

I'm comparing low and high levels. Yes, at the lowest level, it's just following rules, but at the highest level of abstraction, that's too simplistic.
(2024-12-30, 08:30 PM)Sciborg_S_Patel Wrote: I'm not sure what AI is doing that refutes EJ Lowe's argument?

I don't know what I can say beyond what I already said, if it's read broadly. It's not a strict refutation, hence my request for a broad reading.
(2024-12-30, 08:35 PM)Laird Wrote: I'm comparing low and high levels. Yes, at the lowest level, it's just following rules, but at the highest level of abstraction, that's too simplistic.

I guess I just don't see why we should think something different in kind has happened here?

(2024-12-30, 11:13 AM)Laird Wrote: Shifting the discussion from the 3 Substances in 3 Environments thread into this more appropriate one:



I don't expect that, as a layman, my understanding of LLMs is any more impoverished than yours as fellow laymen, but I still think that their performance on tasks that for humans would require deep understanding is very unexpected.

Consider in particular that, as I posted above, OpenAI's new, yet-to-be-released o3 model achieved 25% on the FrontierMath problem set, a set of problems apparently very hard even for Fields Medalists to solve, with some requiring hours or even days.

Think about that. The best mathematicians in the world have to deeply understand and think hard about how to solve these problems. This is not the sort of "trick" where an AI is paraphrasing and regurgitating insights that it's hoovered up from the web. Nor is it like Mathematica or MATLAB, where the algorithmic solutions to problems are preprogrammed by humans. This is an AI that has learnt, without task-specific training, to solve novel, unseen problems - a paradigmatic case of the sort we would expect to require deep understanding and real thought from the best and brightest humans - yet here is a machine, incapable of true understanding and thought, coming up with correct solutions anyhow.

Still don't find that hard to fathom? If so, I still find that hard to fathom. The best I can make of it is that AI of this capacity challenges some aspects of the mainstream views we hold on this forum, and it's easier to summarily dismiss it than to reflect on the nature of the challenge and on how to respond more thoughtfully.

I think you must be right about the mysteriousness of such apparent deep thinking on the part of the best current AIs. It can't just be the extremely fast, large-scale statistical and other computational data processing programmed into these systems. Materialists would presumably explain the apparently intelligent responses as epiphenomena or byproducts of the vastly complex algorithmic processing that, at base, is all that is actually going on in the AIs' computers. These advanced AI systems may indeed generate some sort of mindlike phenomenon as an epiphenomenon of the digital processing, but I think that whatever this mindlike phenomenon is, it isn't anything at all like our own minds.

We know from the empirical evidence of NDEs and other paranormal phenomena, and from various strong philosophical arguments, that our minds are fundamentally immaterial and of a radically different existential nature than mere epiphenomena of our brains' neural processing. 

That implies that whatever "mind" is exhibited by advanced AI systems is of a very different nature from our consciousness, and generates apparently thoughtful, human-like answers by an entirely different process. This "different process" probably does not involve the fundamental subjective inner awareness, perception and capability of experiencing that are the essence of our own ultimately immaterial consciousness. That means that whatever these machine "minds" are, they are of a fundamentally, existentially lesser nature than our own minds.

So whatever these AI machine "minds" really are, they must be totally alien to our own thoughts and minds and consciousness. And we are making them more powerful every day. Meaning we are treading in very dangerous waters.
To try to confirm Laird's position on generative AI, I tried a little experiment: from my PC, I posed a question to an experimental generative AI that is now available for queries on the Internet. I think that adequately answering this question requires more than just processing material already available on the Web, for two reasons. First, the question involves an obscure hobby that I indulge in myself, high-end headphone audio, on which there is little material online. Second, an adequate response has to involve comparisons of subjective musical sound quality, perceptions and awareness that are intangible properties of consciousness.

It seems to me that an adequate response to this question would indicate that the generative AI producing it is somehow overcoming its supposed limitation of being merely a sophisticated, large-scale algorithmic computation over available human-written material on the Internet.

I have the Sparkos Labs Aries headphone amplifier in my own headphone audio system. The manufacturer has introduced a new design they call the Gemini, which differs somewhat from the Aries and costs less. I've not heard the new Gemini amp, but based on its design I think the generative AI's answer is almost certainly correct, despite the fact that a Google search can find no reviews on the Internet that make this comparison.

This is the question I posed to the generative AI:

How does the Sparkos Labs Gemini headphone amplifier compare with the Sparkos Labs Aries headphone amplifier?

Following is the generative AI overview response that I got. How did it come up with this correct and well-stated answer, which seems very much to require an appreciation of subjective aspects of musical sound quality? That seems to be somewhat of a mystery, and to imply that the generative AI has some sort of mindlike intelligent nature that is nevertheless alien to our immaterial consciousness.

Quote:"When comparing the Sparkos Labs Gemini to the Aries, the key difference is that the Gemini is a hybrid design incorporating vacuum tubes in its preamp stage, giving it a warmer, more analog sound, while the Aries is a pure solid-state amplifier, known for its exceptional clarity and detail retrieval, making it more focused on neutral accuracy; both are considered high-end headphone amplifiers from Sparkos Labs, with the Gemini offering a slightly more "musical" character due to its tube element.

Key points to remember:
Design: Gemini is a hybrid (tube + solid-state) while Aries is fully solid-state.
Sound signature: Gemini tends towards a warmer, richer sound due to the tubes, while Aries is considered more neutral and analytical.
Target user: Users seeking a slightly warmer and more musical sound might prefer the Gemini, while those prioritizing detail and accuracy would likely choose the Aries."
@nbtruthman ->

From the Sparkos Labs PDF guide to the Gemini:

Quote:Vacuum tubes have long been revered in the world of audio for their ability to impart a warm, harmonically-rich sound signature that simply can't be replicated by solid-state components alone. They bring a sense of depth, dimension, and character to your music that goes beyond mere amplification. Whether you're a seasoned tube aficionado, or just beginning your journey into the world of high-fidelity vacuum tube sound, the Gemini is the ultimate platform for exploring, experiencing, and customizing your very own vacuum tube-based audio experience.

Add in the exploited humans-in-the-loop that I posted about earlier, and this doesn't seem to require anything but the same "magic tricks" that power LLMs?

Maybe I’m missing something?

Just to note, my objection isn't purely metaphysical. I do think that synthetically made conscious entities are possible; they just have to be androids rather than mere programs on a Turing Machine.

Also worth noting: the materialist Searle has written strongly against the idea of conscious programs, whereas Chalmers (some kind of non-materialist... Dualist?) and Hoffman (Idealist) accept the possibility.

(2024-12-30, 08:38 PM)Sciborg_S_Patel Wrote: I guess I just don't see why we should think something different in kind has happened here?

In a way, that's the point. The physicalist or epiphenomenalist is going to argue from that observation: that even though at the lowest level both AI and our minds are entirely rule-based, at a higher, more abstract level, mind-like features - seemingly but not actually of a different kind - emerge out of that rule-based lower level, for both AI and our minds.
(2024-12-31, 09:28 PM)nbtruthman Wrote: Following is the generative AI overview response that I got. How did it come up with this correct and well-stated answer, which seems very much to require an appreciation of subjective aspects of musical sound quality?

For another example, Sci skeptically asked of LaMDA back in June 2022, "Can it read the scripts of Marvel movies and explain which ones are best thought of as Fantasy films and which are Science Fiction?", and when ChatGPT became publicly available, I put that question to it nearly a year later, in May 2023. To me, its response demonstrated a high level of what we would consider in a human to be understanding. Even though his skeptical challenge had been - in my view - more than adequately addressed, Sci seemed somewhat underwhelmed.

There are plenty of other examples though. Recently, in the context of my ongoing dialogue with Valmar, I asked ChatGPT whether it thought that the philosophical type-token distinction could be applied to idealism in the sense that the type (of substance) is consciousness and the tokens (concrete instances of the substance) are individual minds, and it responded in detail in the affirmative. This required the application of a general idea to a specific context, both somewhat technical, and seemed to me to again indicate what in a human we would consider to be meaningful understanding.

(2024-12-31, 12:38 AM)nbtruthman Wrote: I think you must be right about the mysteriousness of such apparent deep thinking on the part of the best current AIs.

I think so too. Even though it's all at bottom "just" rule-based, very curiously, something like understanding emerges out of it. I don't think it's conscious understanding, but I'm not sure that it arises out of an entirely different process to that of our own minds, or at least to that of our brains, which are based on neural networks too. Our minds, as we agree, are not reducible to our brains, but it seems likely that our brains can process raw information in something like the way these LLMs do, serving as an adjunct to our minds.
(2024-12-30, 11:13 AM)Laird Wrote: I don't expect that, as a layman, my understanding of LLMs is any more impoverished than yours as fellow laymen, but I still think that their performance on tasks that for humans would require deep understanding is very unexpected.

You don't need a deep anything when you can brute-force it. LLMs just brute-force with large datasets, cleverly written algorithms and a lot of processing power. Nothing special going on there.

(2024-12-30, 11:13 AM)Laird Wrote: Consider in particular that, as I posted above, OpenAI's new, yet-to-be-released o3 model achieved 25% on the FrontierMath problem set, a set of problems apparently very hard even for Fields Medalists to solve, with some requiring hours or even days.

That doesn't require "intelligence" ~ it just requires 1) the problem to be computable, 2) the right algorithm and datasets, and 3) a stupidly vast amount of processing power to just crunch through it.

(2024-12-30, 11:13 AM)Laird Wrote: Think about that. The best mathematicians in the world have to deeply understand and think hard about how to solve these problems. This is not the sort of "trick" where an AI is paraphrasing and regurgitating insights that it's hoovered up from the web. Nor is it like Mathematica or MATLAB, where the algorithmic solutions to problems are preprogrammed by humans. This is an AI that has learnt, without task-specific training, to solve novel, unseen problems - a paradigmatic case of the sort we would expect to require deep understanding and real thought from the best and brightest humans - yet here is a machine, incapable of true understanding and thought, coming up with correct solutions anyhow.

No "learning" is required. You just need to understand the computational nature of LLMs to see that there is no intelligence or mind or anything. Just an algorithm based by lots of processing power.

(2024-12-30, 11:13 AM)Laird Wrote: Still don't find that hard to fathom? If so, I still find that hard to fathom. The best I can make of it is that AI of this capacity challenges some aspects of the mainstream views we hold on this forum, and it's easier to summarily dismiss it than to reflect on the nature of the challenge and on how to respond more thoughtfully.

Only because you are blinded by the appearances. You need to look beyond the smoke and mirrors.

I dismiss it because I understand how LLMs work. They're just more computer ~ scaled up.

(2025-01-02, 03:17 AM)Laird Wrote: In a way, that's the point. The physicalist or epiphenomenalist is going to argue from that observation: that even though at the lowest level both AI and our minds are entirely rule-based, at a higher, more abstract level, mind-like features - seemingly but not actually of a different kind - emerge out of that rule-based lower level, for both AI and our minds.

I think we'd need to see a good trace-through of what the program did to arrive at the answer before we concluded that?

AFAIK what most people are judging are the outputs to their queries, with the LLM AI's actual behind-the-scenes work being obscured by the amount of processing + data ("black box")?

Norvig's old LISP AI book has a lot of fascinating algorithms that, had I not been able to see the source code he walks through, could make me think that AIs were thinking. Same with some game AI I've looked into.
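
As an illustration of how convincing bare rules can look from the outside, here is a toy Python sketch in the spirit of the ELIZA-style pattern matchers Norvig walks through (not his actual code, and the rules are invented for the example):

Code:
import re

# Invented rules for illustration: a regex pattern plus a reply template.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i think (.*)", "What makes you think {0}?"),
    (r"(.*)", "Please go on."),
]

def respond(text):
    # Normalize, then answer with the first template whose pattern
    # matches, echoing the captured words back. No understanding anywhere.
    text = text.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I feel that no one understands me."))
# -> Why do you feel that no one understands me?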

Now that I know how much humans around the globe are exploited to keep the AI hype going, I actually have increased doubts about LLMs being as impressive as they are hyped to be.

However, to reiterate: my issue is not with the idea that a synthetic entity we made could achieve consciousness, but rather that I don't believe a Turing Machine has the correct structure to instantiate* consciousness. I think the proper hardware could do it, but that hardware would either be a conscious entity or not, rather than a Turing Machine that becomes conscious when running the right programs.

*Used "instantiate" as a metaphysically neutral term. Could be an immaterial soul is embodied, could be a new alter of the Ur-Mind, etc.
