AI megathread

301 Replies, 12187 Views

(2025-01-08, 07:34 PM)Sciborg_S_Patel Wrote: Jetbrains MPS is about creating Domain Specific Languages. You can constrain and type check these languages based on the conceptual aspects of each Domain. This can get incredibly powerful, allowing people to put all kinds of domain knowledge in programmatic form.

Yes, I saw that. I still don't see what it's got to do with LLMs. That's not how they work.

My point isn't that digital conceptual modelling is impossible, but that it's very, very surprising that LLMs can conceptually model the world purely as a result of processing a bunch of words about it, without any explicit way to relate those words to their referents, i.e., meaning.

(2025-01-08, 07:34 PM)Sciborg_S_Patel Wrote: Now that I know AI companies have essentially wage-slaved the Global South to handle a lot of training

That's also not relevant to the points I made. That the algorithms behind LLMs require quality data vetted by humans doesn't detract from their achievements. Wage-slavery is a separate issue to which I won't respond here beyond affirming that of course I oppose it.

(2025-01-08, 07:34 PM)Sciborg_S_Patel Wrote: As such I don’t think the examples you use about why it’s metaphysically interesting carry weight.

You don't find it metaphysically interesting that merely learning from digital representations of words without having any explicit means of associating those words with concepts, as we humans do through conscious experience, can result in a conceptual model of the world and sufficient "understanding" to talk in natural language about the world in appropriate, intelligent, and even insightful ways?

I find that very strange given your typical intellectual curiosity.

(2025-01-08, 07:34 PM)Sciborg_S_Patel Wrote: To me the more interesting cases would be game AI and theorem provers.

Why? What is more compelling about game AI than LLMs? And is AI even being used in theorem provers these days?
(2025-01-09, 08:25 AM)sbu Wrote: I think there’s some confusion here. A DSL is based on a grammar that’s used to transform DSL statements into a syntax tree. It’s then the developer’s task to implement deterministic rules in code for each node in such a syntax tree.

It seems as though MPS goes even further and turns a DSL into working code.
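The parse-then-walk pipeline sbu describes can be sketched in a few lines. This is only an illustrative toy, not how MPS itself works: it reuses Python's own parser as a stand-in for a DSL grammar, and the "DSL" here (just `+` and `*` over numbers) is invented for the example.

```python
# Toy sketch of sbu's description: a DSL statement is parsed into a
# syntax tree, and the developer codes a deterministic rule for each
# node type. Python's ast module stands in for a real DSL grammar.
import ast

def evaluate(node):
    """Deterministic rule for each syntax-tree node type."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.BinOp):
        left, right = evaluate(node.left), evaluate(node.right)
        if isinstance(node.op, ast.Add):
            return left + right
        if isinstance(node.op, ast.Mult):
            return left * right
        raise ValueError("operator not in the DSL")
    if isinstance(node, ast.Constant):
        return node.value
    raise ValueError(f"node type not in the DSL: {type(node).__name__}")

tree = ast.parse("2 + 3 * 4", mode="eval")  # DSL statement -> syntax tree
print(evaluate(tree))  # -> 14
```

The point of the sketch is the division of labour: the grammar fixes the tree shape, and every behaviour is an explicit, inspectable rule attached to a node type, which is exactly what an LLM does not have.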

(2025-01-09, 08:25 AM)sbu Wrote: AI, however, works fundamentally differently.

Right. Mostly neural networks, as far as I understand it, albeit very sophisticated ones.

(2025-01-09, 08:25 AM)sbu Wrote: The logic doesn’t reside in the code lines but emerges from patterns in data. This is why AI is such a breakthrough—it challenges the idea that intelligence is something transcendent or mysterious. Instead, it shows that what we think of as intelligence might simply be the ability to recognize patterns, make predictions, and adapt based on data.

Much like how babies learn about the world through experience—observing patterns, experimenting, and refining their understanding—AI models learn from enormous datasets to generalize and solve problems. Whether it’s natural language, image recognition, or protein folding, AI demonstrates that complex problem-solving can arise from computational processes rather than any mystical property. This suggests that intelligence, far from being an ineffable quality, could simply be the result of sufficiently rich patterns and processing. In this context, I think @Laird makes a fair point.

That seems fair, yes, although as I wrote I don't believe that conscious ("real" rather than "artificial") intelligence actually can be reduced in that way, nor even that it functions in that way other than perhaps for its embodied aspect in the brain.
(2025-01-10, 08:32 AM)Laird Wrote: Rather than acknowledge that you misunderstood a comp-sci term, and accept that you were wrong, you've chosen to duck, dodge, weave, and double down. Poor form, dude.

... what?  Confused
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


(2025-01-10, 06:23 AM)Valmar Wrote: I'm actually not sure this is possible... I don't think that there is a "minimally necessary structural aspect" of the body that "instantiates" a conscious entity. After all, how do conscious entities enter the world? By entering a fetus that is of the right stage of growth ~ at a very specific stage. What was it, 49 days? Even the Tibetan Buddhists knew this, long before medical science ever confirmed it. So how they attained that knowledge is impressive.

From my perspective, given various studies and the experiences of others, I'd suggest the '49 days' part is vastly over-reaching, trying to pin down something which we actually don't know. For example, Helen Wambach gathered accounts from hypnotic regression which had some instances of consciousness wandering in and out of the developing foetus, and broadly one could place the time of entry anywhere in the range of before conception to after physical birth. Of course one should not depend only on Wambach, but sometimes NDErs are able to describe things over a longer timescale than ordinarily expected, and they tend to offer ambiguity rather than precision about the association of consciousness with a body.
(2025-01-10, 10:52 AM)Typoz Wrote: From my perspective, given various studies and the experiences of others, I'd suggest the '49 days' part is vastly over-reaching, trying to pin down something which we actually don't know. For example, Helen Wambach gathered accounts from hypnotic regression which had some instances of consciousness wandering in and out of the developing foetus, and broadly one could place the time of entry anywhere in the range of before conception to after physical birth. Of course one should not depend only on Wambach, but sometimes NDErs are able to describe things over a longer timescale than ordinarily expected, and they tend to offer ambiguity rather than precision about the association of consciousness with a body.

I find that a bit odd, because it doesn't explain why the baby is alive outside of the womb if the incarnate isn't in the body yet... because it is the incarnate's mind that animates the physical form... besides, 49 days correlates with the time that the heart starts beating. What is happening if not incarnation...?

Besides that... hypnotic regression has been criticized for being rather inaccurate at times, so I am unsure what to believe about that currently.


(2025-01-10, 08:34 AM)Laird Wrote: That seems fair, yes, although as I wrote I don't believe that conscious ("real" rather than "artificial") intelligence actually can be reduced in that way, nor even that it functions in that way other than perhaps for its embodied aspect in the brain.

There is clearly an unknown process bridging the gap between brain states and human consciousness. While artificial intelligence leverages existing latent structures in data—for example, in applications such as protein folding—some people mistakenly conflate that process with the theoretical limits of Turing machines. Yet the deeper mystery, to my mind, is whether the very existence of these latent structures might be a prerequisite for complex human behaviors, including speech. If kids are not exposed to speech before a certain age for example, they never learn to speak.
(2025-01-10, 12:13 PM)sbu Wrote: There is clearly an unknown process bridging the gap between brain states and human consciousness. While artificial intelligence leverages existing latent structures in data—for example, in applications such as protein folding—some people mistakenly conflate that process with the theoretical limits of Turing machines. Yet the deeper mystery, to my mind, is whether the very existence of these latent structures might be a prerequisite for complex human behaviors, including speech. If kids are not exposed to speech before a certain age for example, they never learn to speak.

I don't quite understand this. Can you elaborate a little on what you mean by the latent structures which might be prerequisites for speech, and how/why they might be prerequisites?
(2025-01-10, 12:30 PM)Laird Wrote: I don't quite understand this. Can you elaborate a little on what you mean by the latent structures which might be prerequisites for speech, and how/why they might be prerequisites?

The latent structures in data are what really enable modern AI, much like how the existence of superposition enables quantum computing. Any link to human consciousness—or the developing consciousness of infants—is merely my own metaphysical speculation, without any evidence. I only wonder why we need a brain at all, and why brain damage impairs cognition. That's all.
(2025-01-10, 11:16 AM)Valmar Wrote: Besides that... hypnotic regression has been criticized for being rather inaccurate at times, so I am unsure what to believe about that currently.

The problem which I observe is that hypnotic regression is unfashionable: it still takes place, but properly controlled scientific studies are not the current focus. Wambach's work dates back to the 1970s or '80s at best. As for why one should take it somewhat seriously, she herself explained that.

Hence we have to depend upon snippets obtained here and there. But as I mentioned, NDE accounts are another example of a source. At any rate I don't personally feel any certainty on the topic and wouldn't feel comfortable endorsing some specified timing for this occurrence (entry of consciousness). There is a danger in attempting to construct an established dogma and placing those who offer dissent in the role of heretic.
(2025-01-10, 08:33 AM)Laird Wrote: Yes, I saw that. I still don't see what it's got to do with LLMs. That's not how they work.

My point isn't that digital conceptual modelling is impossible, but that it's very, very surprising that LLMs can conceptually model the world purely as a result of processing a bunch of words about it, without any explicit way to relate those words to their referents, i.e., meaning.

That's also not relevant to the points I made. That the algorithms behind LLMs require quality data vetted by humans doesn't detract from their achievements. Wage-slavery is a separate issue to which I won't respond here beyond affirming that of course I oppose it.

You don't find it metaphysically interesting that merely learning from digital representations of words without having any explicit means of associating those words with concepts, as we humans do through conscious experience, can result in a conceptual model of the world and sufficient "understanding" to talk in natural language about the world in appropriate, intelligent, and even insightful ways?

I find that very strange given your typical intellectual curiosity.

Why? What is more compelling about game AI than LLMs? And is AI even being used in theorem provers these days?

Oh, I didn't think you supported wage slavery; I was bringing that up as part of my reply, in tandem with noting that MPS allows one to programmatically encode domain knowledge.

LLMs don't have the same structural representation as MPS abstract syntax trees for DSLs, but they do have the high-dimensional token-embedding space at minimum. There has at least been talk of adding a further semantic layer, though I would have to check individual companies to see how much of this has been implemented.
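To make the "token space" contrast with an AST concrete, here is a minimal sketch of what is meant: each token is a point in a vector space, and geometric closeness stands in for relatedness. The words and 4-dimensional vectors below are made up for illustration; real LLMs learn embeddings with thousands of dimensions.

```python
# Toy illustration of a high-dimensional "token space": tokens map to
# vectors, and cosine similarity measures how close two tokens sit.
# The vectors here are invented; real embeddings are learned from data.
import math

embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.3],
    "queen": [0.9, 0.7, 0.2, 0.9],
    "apple": [0.1, 0.2, 0.9, 0.4],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# In this made-up space, "king" sits closer to "queen" than to "apple".
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

Unlike an AST node, nothing in such a space is an explicit, inspectable rule; relatedness is implicit in learned geometry, which is why tracing "why" an LLM said something is so hard.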

And trust that my intellectual curiosity is very much piqued, but it's the same as when I saw David Copperfield and other stage magicians pull off amazing illusions: I want to know how the trick was done. Sadly, AI companies seem to treat the current black-box nature of AI as a curtain behind which all sorts of claims can be issued about programs granting cognition to Turing machines.

Ideally we'll have the ability to actually trace execution in the coming years, though I don't even know if LLM technology will be profitable given the potential diminishing returns and massive power consumption. Ideally the former pops the bubble, as I definitely don't want untrustworthy and often mismanaged software companies to be in charge of power plants:

Why Big Tech is turning to nuclear to power its energy-intensive AI ambitions
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-01-10, 10:26 PM by Sciborg_S_Patel. Edited 2 times in total.)
