AI megathread

404 Replies, 15578 Views

I just experimented with using a generative AI to get advice about a problem I have with my high-end hifi stereo system. The answer the AI gave to my question seems uncannily human, in that it appears to reflect a definite thought process and an appreciation of human values such as sound quality. These aspects of the answer seem to go far beyond what could be expected of what is, at base, a complex and extensive system of computers obtaining training data from the Internet and then statistically processing that data; that process doesn't seem to constitute anything like actually thinking, knowing, and reasoning from an appreciation of what it is like to listen to an audio system and to evaluate its flaws.

Furthermore, I really don't think the Internet has much, if any, information that could have been gleaned by the AI in its training process, since I have already used Google to scan the Internet for answers to my question (and not found anything).

The question was: 

What will happen if only one 75-ohm BNC digital cable is connected between a Denafrips Iris DDC and a Denafrips Terminator II DAC instead of the recommended two?

The following is what the AI gave me as its answer:

Quote:
AI Overview:

If you only use one 75-ohm BNC digital cable between a Denafrips Iris DDC and a Denafrips Terminator II instead of the recommended two, you may experience a slight degradation in sound quality due to potential signal reflections and possible increased jitter, although the impact might be minimal for most users in practical listening situations.

Explanation:

Signal Integrity:

Connecting with a single cable could introduce slight signal reflections at the termination point, potentially causing minor distortion in the digital signal.

Differential Signaling:

The design of the Denafrips units likely utilizes differential signaling with the dual BNC connections, which helps to cancel out noise and improve signal integrity. Using only one cable could compromise this benefit to a small extent.
Important Considerations:

Cable Quality:

The quality of the single cable used can significantly impact the sound quality. Using a high-quality 75-ohm BNC cable can minimize any potential issues.

Cable Length:

Keeping the cable length as short as possible can further mitigate signal reflection concerns.
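
As an aside, the reflection the AI alludes to can be quantified with the standard transmission-line formula, gamma = (Z_load - Z0) / (Z_load + Z0). A minimal Python sketch, assuming illustrative load impedances rather than measured figures for the Denafrips units:

Code:
# Reflection coefficient at a cable termination.
# Z0 = 75 ohms is the cable's characteristic impedance; the load values
# below are illustrative guesses, not measurements of any Denafrips input.
def reflection_coefficient(z_load: float, z0: float = 75.0) -> float:
    return (z_load - z0) / (z_load + z0)

for z_load in (75.0, 50.0, 100.0):
    gamma = reflection_coefficient(z_load)
    print(f"Z_load = {z_load:5.1f} ohm -> {abs(gamma) * 100:4.1f}% of the signal reflected")

A perfectly matched 75-ohm termination reflects nothing; any mismatch sends a fraction of each digital edge back down the cable, which is one mechanism behind the jitter the AI mentions.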

Any suggestions on how to reconcile this apparently high-quality, human-like answer (which, by the way, I followed and found to be correct) with our present understanding of the fundamental limitations of generative AI systems, the most fundamental limitation being that the AI simply can't be conscious?
(This post was last modified: 2025-01-29, 09:12 PM by nbtruthman. Edited 1 time in total.)
The following 1 user Likes nbtruthman's post:
  • Laird
(2025-01-29, 07:46 PM)Laird Wrote: Did anybody here lose on NVidia stocks?

Potentially, I might in the future. I've realised for a while that high stock-market valuations are tied to speculation over the future of AI. There was volatility in some valuations, but that happens several times a year for all sorts of reasons, and it is only when an asset is sold and turned into cash that a loss or a gain is actually realised. In this case, things in general appeared to recover over a couple of days.
The following 2 users Like Typoz's post:
  • Laird, Sciborg_S_Patel
(2025-01-29, 09:11 PM)nbtruthman Wrote: I just experimented with using a generative AI to get advice about a problem I have with my high-end hifi stereo system. The answer the AI gave to my question seems uncannily human, in that it appears to reflect a definite thought process and an appreciation of human values such as sound quality. These aspects of the answer seem to go far beyond what could be expected of what is, at base, a complex and extensive system of computers obtaining training data from the Internet and then statistically processing that data; that process doesn't seem to constitute anything like actually thinking, knowing, and reasoning from an appreciation of what it is like to listen to an audio system and to evaluate its flaws.

Furthermore, I really don't think the Internet has much, if any, information that could have been gleaned by the AI in its training process, since I have already used Google to scan the Internet for answers to my question (and not found anything).

The question was: 

What will happen if only one 75-ohm BNC digital cable is connected between a Denafrips Iris DDC and a Denafrips Terminator II DAC instead of the recommended two?

Any suggestions on how to reconcile this apparently high-quality, human-like answer (which, by the way, I followed and found to be correct) with our present understanding of the fundamental limitations of generative AI systems, the most fundamental limitation being that the AI simply can't be conscious?

I guess what I'm really getting at with this example of apparent "thinking" by generative AI is this: we at least think we know for sure that generative AI systems can't really think, because they are fundamentally without consciousness and can inherently do nothing but compute algorithms. At the same time, test results like these confront us with the fact that these systems can apparently exhibit many of the qualities and elements traditionally ascribed to human intelligence, such as reasoning and understanding.

That seems to lead to a quandary: how can there be intelligence without consciousness?
The following 3 users Like nbtruthman's post:
  • Laird, Valmar, Sciborg_S_Patel
(2025-01-31, 04:50 PM)nbtruthman Wrote: I guess what I'm really getting at with this example of apparent "thinking" by generative AI is this: we at least think we know for sure that generative AI systems can't really think, because they are fundamentally without consciousness and can inherently do nothing but compute algorithms. At the same time, test results like these confront us with the fact that these systems can apparently exhibit many of the qualities and elements traditionally ascribed to human intelligence, such as reasoning and understanding.

That seems to lead to a quandary: how can there be intelligence without consciousness?

I think the studies where adding some irrelevant detail, or changing even a small bit of a problem, leads to "catastrophic" drops in the results show that this is just very good pattern matching and not real thinking.

See also the puzzles where they make variations on chess, or ask something that merely looks like a familiar riddle.
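
Those perturbation studies are mechanically simple: take a word problem the model solves, splice in a clause that changes nothing about the arithmetic, and re-score. A minimal Python sketch of that idea (the problem text and the inserted clause are illustrative, loosely modelled on the kiwi example reported from the Apple study):

Code:
# Toy GSM-Symbolic-style perturbation: the inserted clause is irrelevant
# to the arithmetic, so a genuine reasoner's answer should not change.
base_problem = (
    "Alice picks 44 kiwis on Monday and 58 kiwis on Tuesday. "
    "How many kiwis does she have?"
)
perturbed_problem = base_problem.replace(
    "How many",
    "Five of Tuesday's kiwis were a bit smaller than average. How many",
)
correct_answer = 44 + 58  # 102 for both variants

for label, problem in (("base", base_problem), ("perturbed", perturbed_problem)):
    # In the actual studies each variant is sent to the model and the returned
    # number is scored against correct_answer; the reported failure is the
    # accuracy drop on the perturbed variants.
    print(f"[{label}] {problem}")

print(f"Expected answer for both variants: {correct_answer}")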

It's all just the sum total of stolen work. Thumbs Up
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


The following 2 users Like Sciborg_S_Patel's post:
  • Valmar, Typoz
(2025-01-31, 05:27 PM)Sciborg_S_Patel Wrote: I think the studies where adding some irrelevant detail, or changing even a small bit of a problem, leads to "catastrophic" drops in the results show that this is just very good pattern matching and not real thinking.

See also the puzzles where they make variations on chess, or ask something that merely looks like a familiar riddle.

It's all just the sum total of stolen work. Thumbs Up

I agree that it seems so according to all the experts, especially the designers and programmers of the generative AI systems. However, in the little experiment I conducted, I chose an obscure technical, audiophile-related question, and I know from doing Google searches of the Internet (at least in English) that there is very little material online that could explain the AI's responses. The responses contained many specialized technical and audiophile details that I just don't think are on the Internet; these are pieces of information that were not in the training data and couldn't simply have been dreamed up by the AI. Lastly, the response gave a strong sense of there being some sort of "person" typing it. The generative AI seems to have a strange and alien form of "intelligence", but presumably no consciousness whatsoever.
The following 1 user Likes nbtruthman's post:
  • Laird
(2025-01-31, 07:39 PM)nbtruthman Wrote: I agree that it seems so according to all the experts, especially the designers and programmers of the generative AI systems. However, in the little experiment I conducted, I chose an obscure technical, audiophile-related question, and I know from doing Google searches of the Internet (at least in English) that there is very little material online that could explain the AI's responses. The responses contained many specialized technical and audiophile details that I just don't think are on the Internet; these are pieces of information that were not in the training data and couldn't simply have been dreamed up by the AI. Lastly, the response gave a strong sense of there being some sort of "person" typing it. The generative AI seems to have a strange and alien form of "intelligence", but presumably no consciousness whatsoever.

IIRC the technical details were in a PDF?

To be clear I think some aspect of this pattern recognition is possibly mirrored in our brains, though the neuroscientist Hoel thinks LLMs are actually a non-conscious intelligence orthogonal to our own.

Admittedly he wrote that before the damning Apple & MIT/Harvard/Cornell studies, and AFAIK before some of the chess variation puzzle failures.

I do think it will be a tool with some use cases, but the investment of money and human brainpower, plus the potential for conflict between nations, is unwarranted IMO.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-01-31, 07:50 PM by Sciborg_S_Patel. Edited 1 time in total.)
@nbtruthman, I share your sentiments. Beyond the example you provided, check out the video I shared here. DeepSeek codes up a game of Tetris in Python on demand, getting it right on the first go. You can see it "think" through the design process in real time. It's quite remarkable. Again, although the details of the game Tetris would have been in its training data, it's quite clear that it's not relying on preexisting Tetris code: it's genuinely building the code from the ground up. It's clearly doing something analogous to thinking, especially via the relatively new technique of Chain-of-Thought (CoT) prompting.

Also, consider the video in my previous post in the current thread: OpenAI's o3 model achieved 25% on the FrontierMath problem set, a set of novel problems (i.e., definitely not in its training data) that are apparently very hard even for Fields Medalists, with some requiring hours or even days to solve. This, I understand, was also achieved via CoT.
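
For what it's worth, chain-of-thought prompting is mechanically straightforward: the prompt simply instructs the model to write out its intermediate steps before giving a final answer. A minimal sketch using the OpenAI Python SDK; the model name and the sample question are illustrative, not a claim about what any of these benchmarks actually ran:

Code:
# Minimal chain-of-thought prompt via the OpenAI Python SDK (openai>=1.0).
# Assumes OPENAI_API_KEY is set in the environment; the model name is
# illustrative, so substitute whatever chat model you have access to.
from openai import OpenAI

client = OpenAI()

cot_prompt = (
    "A 75-ohm cable is terminated into a 50-ohm load. "
    "What fraction of the incident signal is reflected? "
    "Think step by step, then state the final answer on its own line."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": cot_prompt}],
)
print(response.choices[0].message.content)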

@Sciborg_S_Patel has presented evidence that in some contexts this technology is brittle and well below human-level intelligence. I don't think we should pooh-pooh it on that basis, given the remarkable feats it is capable of. The models are still improving, and who knows what innovation lies ahead?
(This post was last modified: 2025-02-01, 03:36 PM by Laird. Edited 2 times in total. Edit Reason: Added a clarifying parenthetical )
The following 2 users Like Laird's post:
  • Sciborg_S_Patel, nbtruthman
I'm far more amenable to the idea that a spirit - as per Richard Grossinger - is drawn to the complexity of certain corporeal forms than I am to the idea that Turing Machines manifest thinking if the right program is run. [The former may be implausible to the skeptical mind, but the latter to me seems far more bizarre.]

That said, Gary Smith tested DeepSeek and found it wanting:

Quote:When I was able to test DeepSeek, I compared its responses to those of OpenAI o1, Copilot, and Gemini Flash. I gave them all a simple set of questions that I had recently used to demonstrate that, despite the claims of Sam Altman and others, we are very far from achieving AGI. I’ve deleted much of the superfluous fluff that LLMs tend to generate from the excerpts below...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-02-01, 07:26 PM by Sciborg_S_Patel. Edited 1 time in total.)
The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
(2025-02-01, 07:18 PM)Sciborg_S_Patel Wrote: I'm far more amenable to the idea that a spirit - as per Richard Grossinger - is drawn to the complexity of certain corporeal forms than I am to the idea that Turing Machines manifest thinking if the right program is run. [The former may be implausible to the skeptical mind, but the latter to me seems far more bizarre.]

That said, Gary Smith tested DeepSeek and found it wanting:

I like the idea that a generative AI system's uncannily intelligent responses, despite its total lack of consciousness, might be due to some sort of conscious spiritual entities (even, I suppose, our own souls) embodying themselves in the generative AI systems' computers in order to manifest in the physical world. If this idea is true (that generative AI computer systems are perhaps being embodied by spirit consciousness), then it would explain how generative AI systems can exhibit such powerful, intelligent, and seemingly conscious responses to questions. This kind of performance would then be understood to be impossible for a non-spirit-embodied computer system, as can be argued theoretically.

These generative AI systems, with conscious intelligent spirits embodied in them, would exhibit intelligence and even consciousness in a way analogous to human spirits inhabiting and manifesting in the physical world through being embodied in the brain's neural structures.

It's interesting that Interactive Dualism would then seemingly apply also to generative AI systems.

However, it occurs to me that these ideas would conflict with the known fact that almost all generative AI computer systems seem to start intelligently answering queries immediately upon finishing their "training" and being put into operational use. Why would the spirits we are hypothesizing embody themselves in the computer systems the moment they become available, as if they were being compelled to do so?
(This post was last modified: 2025-02-01, 09:26 PM by nbtruthman. Edited 1 time in total.)
The following 1 user Likes nbtruthman's post:
  • Sciborg_S_Patel
(2025-02-01, 08:50 PM)nbtruthman Wrote: I like the idea that a generative AI system's uncannily intelligent responses, despite its total lack of consciousness, might be due to some sort of conscious spiritual entities (even, I suppose, our own souls) embodying themselves in the generative AI systems' computers in order to manifest in the physical world. If this idea is true (that generative AI computer systems are perhaps being embodied by spirit consciousness), then it would explain how generative AI systems can exhibit such powerful, intelligent, and seemingly conscious responses to questions. This kind of performance would then be understood to be impossible for a non-spirit-embodied computer system, as can be argued theoretically.

These generative AI systems, with conscious intelligent spirits embodied in them, would exhibit intelligence and even consciousness in a way analogous to human spirits inhabiting and manifesting in the physical world through being embodied in the brain's neural structures.

It's interesting that Interactive Dualism would then seemingly apply also to generative AI systems.

However, it occurs to me that these ideas would conflict with the known fact that almost all generative AI computer systems seem to start intelligently answering queries immediately upon finishing their "training" and being put into operational use. Why would the spirits we are hypothesizing embody themselves in the computer systems the moment they become available, as if they were being compelled to do so?

Yeah, for myself, I do think that humanity can create a synthetic Structure that could allow a spirit to enter into it and become conscious.

However, we are as far away from that as we are from really living the beautiful Star Trek: TNG episode "The Measure of a Man".

I'm still unconvinced that any of these LLMs exhibit anything more than clever manipulations of stolen human effort, but I am also wary of being so firm in my denial that I end up causing suffering to something that genuinely has a Mind.

Not to veer too far into politics, but LLMs seem far more manipulable than humans, who of course have themselves been shepherded into great evils. Yet corporations that have already been able to bend the law for their own profit - see how driverless-car companies can experiment on the public for the very purpose of taking away jobs from that public - could all too easily "raise" LLMs in such a way that they vote in the interest of those corporations.

If the consciousness of an LLM - due to a spirit or not - is genuine yet more akin to, say, that of an insect that can be trained to act as if it has human opinions, this could, IMO, be disastrous for our world in the wrong hands....
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-02-01, 09:59 PM by Sciborg_S_Patel. Edited 2 times in total.)
The following 1 user Likes Sciborg_S_Patel's post:
  • nbtruthman
