AI megathread

301 Replies, 12283 Views

(2025-01-05, 09:55 PM)Laird Wrote: I had an interesting conversation with ChatGPT on this theme, partly also inspired by the article which @Sciborg_S_Patel posted earlier, "Another Warning That the AI Bubble Is Near Bursting…". I've attached it as a PDF to this post. The most relevant part is its answer to my question:

Given all the positive hype about AI on the net, it doesn't surprise me that an LLM will speak about itself in those terms, though it might possibly be tailored to give a favorable - and industry-friendly - response about its own abilities.

That said, looking at DSL design systems like JetBrains MPS, it's not clear to me that what it does is beyond the bounds of expectation, so long as humans-in-the-loop are presenting concepts in varied domains in a way that is manageable by an AI.

To be clear, it's all very impressive as a human achievement in faking thought; I'm just not seeing anything that makes me really think anything metaphysically special is going on.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-01-06, 12:24 AM by Sciborg_S_Patel.)
The following 2 users Like Sciborg_S_Patel's post:
  • Typoz, Valmar
(2025-01-05, 09:55 PM)Laird Wrote: No acknowledgement that LLMs don't use brute-force algorithms? Admitting when you've made a mistake is a good sign of intellectual honesty.

You completely overlooked the context I gave! Algorithms don't exist in a void ~ they need a CPU to run on! So, a powerful CPU plus an algorithm is what composes a brute-force algorithm!

(2025-01-05, 09:55 PM)Laird Wrote: No, the point is that even though we know that LLMs are simply a combination of those three elements, they behave in ways that indicate at least an analogue of conceptual understanding.

No, they really do not, at all. You need to really dig into the engineering of LLMs to understand that they have not a single iota of conceptual reasoning.

It's the exact same logic Physicalists and Materialists use to claim that brains can conjure minds from mere complexity and the right combinations of molecules.

That's how they sell LLMs ~ by pretending that humans are just machines, therefore machines can also be conscious!

(2025-01-05, 09:55 PM)Laird Wrote: I had an interesting conversation with ChatGPT on this theme, partly also inspired by the article which @Sciborg_S_Patel posted earlier, "Another Warning That the AI Bubble Is Near Bursting…". I've attached it as a PDF to this post. The most relevant part is its answer to my question:

Quote:I would like to explore it further, because it seems to me that the understanding that your responses demonstrate - albeit that understanding is non-conscious, and maybe more of an analogue of understanding than understanding proper - goes deeper than merely predicting relationships and patterns of language. I get the sense that your understanding reflects more of a *conceptual* model of the world than you've admitted to.

You don't need consciousness or conceptual understanding to print out a string of words that appear intelligible.
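To see how little machinery that takes, here is a minimal sketch of a bigram Markov chain in Python ~ no model of meaning anywhere, just a table of which words have followed which (the toy corpus is invented for the example):

Code:
import random
from collections import defaultdict

# Build a table of which words follow which, from a toy corpus.
corpus = "the mind is not the brain and the brain is not the mind".split()
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

# Emit text by repeatedly sampling a word that has followed the current one.
word, output = "the", ["the"]
for _ in range(10):
    word = random.choice(following[word])
    output.append(word)
print(" ".join(output))  # e.g. "the brain is not the mind is not the brain and"

Scale that table up by billions of parameters and the output gets far more fluent, but the principle ~ emit whatever plausibly comes next ~ is unchanged.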

(2025-01-05, 09:55 PM)Laird Wrote: Here are the key extracts from its answer:

Entirely meaningless, coming from an LLM. LLMs don't think or have intelligence.

They must be trained on data produced by real human beings ~ they are basically mass plagiarism bots.
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


(2025-01-06, 12:24 AM)Sciborg_S_Patel Wrote: [...] To be clear, it's all very impressive as a human achievement in faking thought; I'm just not seeing anything that makes me really think anything metaphysically special is going on.

Indeed. The fact that you can run an AI on conventional hardware using a set of Python scripts really helps blow the magic dust off of the whole concept. There's nothing metaphysically special going on, just a very spiffed-up next-word generator.
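For anyone who wants to blow the dust off for themselves, here is a minimal sketch of that next-word loop, assuming the Hugging Face transformers library and the small public "gpt2" model (greedy decoding, purely for illustration):

Code:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small public model; a sketch, not a recommendation.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The mind is", return_tensors="pt").input_ids
for _ in range(20):
    logits = model(ids).logits[0, -1]   # scores over every possible next token
    next_id = torch.argmax(logits)      # greedily take the single most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tokenizer.decode(ids[0]))

The whole loop is: score the possible next tokens, append one, repeat.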
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


The following 1 user Likes Valmar's post:
  • Sciborg_S_Patel
Apple says it will update AI feature after BBC complaint

Quote:The tech giant is facing calls to pull the technology after its flawed performance.

The BBC complained last month after an AI-generated summary of its headline falsely told some readers that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself.

On Friday, Apple's AI inaccurately summarised BBC app notifications to claim that Luke Littler had won the PDC World Darts Championship hours before it began - and that the Spanish tennis star Rafael Nadal had come out as gay.

This marks the first time Apple has formally responded to the concerns voiced by the BBC about the errors, which appear as if they are coming from within the organisation's app.

"These AI summarisations by Apple do not reflect – and in some cases completely contradict – the original BBC content," the BBC said on Monday.
The following 1 user Likes Typoz's post:
  • Sciborg_S_Patel
Accept no imitations (previously posted in this thread)

Edward Feser

Quote:...What Turing says in the paper is that the question “Can machines think?” is “too meaningless to deserve discussion,” that to consider instead whether a machine could pass the Turing Test is to entertain a “more accurate form of the question,” and that if machines develop to the point where they can pass the test, then “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

This is very curious.  Suppose you asked me whether gold and pyrite are the same, and I responded by saying that the question is “too meaningless to deserve discussion,” that it would be “more accurate” to ask whether we could process pyrite in such a way that someone examining it would be unable to tell it apart from gold, and that if we can so process it, then “the use of words and general educated opinion will have altered so much that one will be able to speak of pyrite as gold without expecting to be contradicted.”  Obviously this would be a bizarre response....

Quote:So, why might Turing or anyone else think that his proposed test casts any light on the question about whether machines can think?  There are at least three possible answers, and none of them is any good.  I’ll call them the Scholastic answer, the verificationist answer, and the scientistic answer.  Let’s consider each in turn.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


The following 2 users Like Sciborg_S_Patel's post:
  • Valmar, Typoz
https://gizmodo.com/google-wants-to-simu...2000546555

The title of this new article is "Google Wants to Simulate the World With AI. Are we sure this world is worth replicating?"

Quote:Apparently not content with its grip on this world, Google is in the process of staffing up its DeepMind research lab to build generative models that are capable of simulating the physical world. The project—which will be headed up by Tim Brooks, one of the leads who helped build OpenAI’s video generator, Sora—will be a critical part of the company’s attempt to achieve artificial general intelligence, according to job listings related to the new team.

It appears that the long-speculated possibility that we may actually be living in a virtual-reality simulation, analogous in some ways to existing advanced video games, is now coming true through the development of a generative AI system whose goal is to simulate our world - though at a level of verisimilitude far below those speculations. For us as outside "users" or "participators" to actually live in a virtual-reality simulation generated from a superior, higher level of reality would require a drastically more sophisticated system than the one currently being developed. The current project's goal is to design and build something rather primitive by comparison: something like a greatly improved and expanded version of the virtual-reality video games already on the market, which use displays of various sorts to simulate battlefields and other locales as realistically as possible. The system being developed is intended to go well beyond that, though not so far as to simulate the entire world. Apparently, recently developed generative AI technology is at present the only way we can attempt to build any kind of world simulation.

A few more thoughts regarding this new project. First, the current system is not being designed and built along the lines of the science-fiction-like speculations which assume that, if this living-in-a-simulation hypothesis is true, we ourselves are actually part of the simulation. That is impossible because, as the well-known Hard Problem of mind or consciousness indicates, consciousness, mind and all their properties and aspects - qualia, subjective awareness, thought and agency - are immaterial and existentially fundamentally different from and higher than matter and computation, which is all the simulation computers can do. That computation is really just the execution of algorithms: at base, machine code being run in the computers' central processors.

So it turns out that, if the world-simulation hypothesis is true, we must be the outside users of it, essentially deliberately experiencing existence in a massive universe simulation generated, perhaps, by our greater selves in a higher reality.
The following 2 users Like nbtruthman's post:
  • Sciborg_S_Patel, Valmar
(2025-01-06, 12:24 AM)Sciborg_S_Patel Wrote: Given all the positive hype about AI on the net, it doesn't surprise me that an LLM will speak about itself in those terms, though it might possibly be tailored to give a favorable - and industry-friendly - response about its own abilities.

I don't find its response surprising given its general capabilities. I don't suspect favourable tailoring.

(2025-01-06, 12:24 AM)Sciborg_S_Patel Wrote: That said, looking at DSL design systems like JetBrains MPS, it's not clear to me that what it does is beyond the bounds of expectation, so long as humans-in-the-loop are presenting concepts in varied domains in a way that is manageable by an AI.

I'm genuinely not clear on what JetBrains MPS has to do with AI in this context.

(2025-01-06, 12:24 AM)Sciborg_S_Patel Wrote: To be clear, it's all very impressive as a human achievement in faking thought; I'm just not seeing anything that makes me really think anything metaphysically special is going on.

Here are the two things that I think are metaphysically special:

Firstly, it is very surprising to me that a conceptual model of reality can be constructed merely from processing (learning from) words, without any explicit way of linking those words to concepts via experience. Experience, it seems to me, is how humans construct conceptual models of reality, and then, through help from adults, link those concepts to words, or develop new concepts based on words to which they're exposed in the context of an experience of their meaning.

Prior to LLMs proving that it can be done, I would not have expected that a machine could, simply by processing (learning from) a bunch of words, generate a conceptual model of reality such that it can respond meaningfully, intelligently, and insightfully to natural-language questions. I would have thought that at best it could have learnt to write syntactically correct responses with a mish-mash of relevant words that might generally occur in that context, but which overall are meaningless - rather like the "context-free grammar" text generated by tools such as SCIgen.
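(For contrast, that SCIgen-like output is trivial to produce. A toy context-free-grammar generator in Python - the grammar itself is invented for illustration - shows the sort of syntactically plausible, semantically empty text I had in mind:)

Code:
import random

# A toy SCIgen-style context-free grammar: each symbol expands into one of
# its productions until only terminal words remain.
grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "A", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["framework"], ["heuristic"], ["paradigm"]],
    "A":  [["stochastic"], ["extensible"]],
    "V":  [["refines"], ["synthesizes"]],
}

def expand(symbol):
    if symbol not in grammar:                 # terminal: emit the word itself
        return [symbol]
    production = random.choice(grammar[symbol])
    return [word for part in production for word in expand(part)]

print(" ".join(expand("S")))  # e.g. "the paradigm synthesizes the stochastic heuristic"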

Secondly, as I pointed out earlier, the fact that LLMs do seem to have a conceptual "understanding" of reality such that they can respond just like a conscious human leaves it open to the epiphenomenalist/physicalist to argue that it's the same for humans: all of our true conceptual understanding is merely embedded in the physical neural networks of our brains, with the phenomenal experience of understanding merely a causally inert tack-on, like steam off a steam engine.

As I also wrote earlier, I don't think that the argument succeeds - and, as you pointed out earlier, there's anyway a stronger, knock-down argument against epiphenomenalism - but it's worth being aware of it.
The following 1 user Likes Laird's post:
  • nbtruthman
(2025-01-06, 02:38 AM)Valmar Wrote: You completely overlooked the context I gave! Algorithms don't exist in a void ~ they need a CPU to run on! So, a powerful CPU plus an algorithm is what composes a brute-force algorithm!

I gave you a hint, but you didn't take it. OK, let's do this the hard way:

A brute-force algorithm is one which tries every possible solution until it finds the correct one. LLMs do not use brute-force algorithms.
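For concreteness, here is what brute force looks like in code - a sketch that cracks a made-up four-digit PIN by exhaustively testing every candidate:

Code:
# Brute force: enumerate the entire solution space and test every candidate.
# The target PIN is made up purely for the example.
def crack_pin(is_correct):
    for candidate in range(10_000):
        pin = f"{candidate:04d}"
        if is_correct(pin):
            return pin
    return None

print(crack_pin(lambda p: p == "7319"))  # -> "7319"

An LLM does nothing of the sort: a single forward pass computes a probability distribution over next tokens, and one is sampled. No enumeration of candidate solutions ever happens, however big the hardware.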

You misused a technical computer science term. You were wrong. You should admit it.

See my above response to Sci regarding the rest.
(2025-01-08, 07:15 AM)Laird Wrote: [...] A brute-force algorithm is one which tries every possible solution until it finds the correct one. LLMs do not use brute-force algorithms.

You misused a technical computer science term. You were wrong. You should admit it. [...]

Rereading my own reply... nowhere did I mention a brute-force algorithm.

An algorithm alone simply isn't sufficient for brute-forcing ~ for that you need both immense processing power and an immense dataset for the algorithm to chew through.

So, no, I wasn't wrong. Please re-read my replies more closely.
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


(2025-01-08, 07:14 AM)Laird Wrote: [...] I'm genuinely not clear on what JetBrains MPS has to do with AI in this context. [...] Here are the two things that I think are metaphysically special: [...]

JetBrains MPS is about creating Domain-Specific Languages (DSLs). You can constrain and type-check these languages based on the conceptual aspects of each domain. This can get incredibly powerful, allowing people to put all kinds of domain knowledge into programmatic form.
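(Not MPS itself, just a toy Python sketch of the idea - all the names are invented - showing a mini-"language" whose statements are checked against domain rules before anything runs:)

Code:
# A toy embedded DSL for recipes: domain rules are enforced when a statement
# is constructed, so invalid domain knowledge is rejected up front.
UNITS = {"g", "ml", "tsp"}

def ingredient(name, amount, unit):
    if unit not in UNITS:                    # domain-level "type check"
        raise ValueError(f"{unit!r} is not a unit in this domain")
    if amount <= 0:
        raise ValueError("amounts must be positive")
    return {"name": name, "amount": amount, "unit": unit}

recipe = [ingredient("flour", 500, "g"), ingredient("milk", 250, "ml")]
# ingredient("salt", 5, "cups") would raise: 'cups' is not a unit in this domain

MPS does this at the level of whole language definitions, generating editors and type systems from the domain concepts.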

Now that I know AI companies have essentially wage-slaved the Global South to handle a lot of the training, I can see how that amount of labor, plus the kind of domain definitions you can do with DSLs, is able to fake thought in the replies of these programs.

As such I don’t think the examples you use about why it’s metaphysically interesting carry weight.

To me the more interesting cases would be game AI and theorem provers. Those, however, are likewise obviously not conscious once one sees how they work. It's all a good magic trick when facing the screen, but once we look at the code behind the curtain the magic is gone, even if we can admire the trick.
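Game AI is a case in point. A minimal minimax sketch for a toy take-away game (players alternately remove one or two stones; whoever cannot move loses) plays "intelligently" through nothing but exhaustive recursive look-ahead:

Code:
# Minimax for a toy take-away game: the "intelligence" is exhaustive
# recursive look-ahead, nothing more.
def minimax(stones, maximizing):
    if stones == 0:                          # the side to move has lost
        return -1 if maximizing else 1
    values = [minimax(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(values) if maximizing else min(values)

print(minimax(4, True))  # -> 1: the first player can force a win from 4 stones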
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


