AI megathread

301 Replies, 12264 Views

Norvig's book Paradigms of AI Programming has been made available by him for free on GitHub. I'm going to quote a section from Chapter 4: General Problem Solver

Quote:The General Problem Solver, developed in 1957 by Allen Newell and Herbert Simon, embodied a grandiose vision: a single computer program that could solve any problem, given a suitable description of the problem. GPS caused quite a stir when it was introduced, and some people in AI felt it would sweep in a grand new era of intelligent machines. Simon went so far as to make this statement about his creation:

Quote:    It is not my aim to surprise or shock you.... But the simplest way I can summarize is to say that there are now in the world machines that think, that learn and create. Moreover, their ability to do these things is going to increase rapidly until -- in a visible future -- the range of problems they can handle will be coextensive with the range to which the human mind has been applied.

Although GPS never lived up to these exaggerated claims, it was still an important program for historical reasons. It was the first program to separate its problem solving strategy from its knowledge of particular problems, and it spurred much further research in problem solving. For all these reasons, it is a fitting object of study.

Quote:The original GPS program had a number of minor features that made it quite complex. In addition, it was written in an obsolete low-level language, IPL, that added gratuitous complexity. In fact, the confusing nature of IPL was probably an important reason for the grand claims about GPS. If the program was that complicated, it must do something important. We will be ignoring some of the subtleties of the original program, and we will use Common Lisp, a much more perspicuous language than IPL. The result will be a version of GPS that is quite simple, yet illustrates some important points about AI.
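
To give a flavour of what that simplified GPS looks like, here's a minimal sketch of its means-ends analysis in Python rather than the book's Common Lisp - a toy reconstruction of the chapter's "driving son to school" example, not Norvig's actual code:

Code:
# GPS-style means-ends analysis, loosely after PAIP chapter 4.
# An operator has preconditions, an add-list, and a delete-list.
# This toy keeps the simple version's known limitations.

class Op:
    def __init__(self, action, preconds, add_list, del_list):
        self.action = action
        self.preconds = preconds
        self.add_list = add_list
        self.del_list = del_list

def achieve(state, goal, ops):
    """Try to make `goal` true, returning the (possibly new) state."""
    if goal in state:
        return state
    for op in ops:
        if goal in op.add_list:
            # Recursively achieve every precondition, then apply the op.
            for pre in op.preconds:
                state = achieve(state, pre, ops)
                if pre not in state:
                    break
            else:
                print("Executing:", op.action)
                return (state - op.del_list) | op.add_list
    return state

def gps(state, goals, ops):
    for goal in goals:
        state = achieve(state, goal, ops)
    return all(g in state for g in goals)

# Toy domain after the book's example; names are illustrative.
ops = [
    Op("drive son to school",
       {"son at home", "car works"},
       {"son at school"}, {"son at home"}),
    Op("shop installs battery",
       {"car needs battery", "shop knows problem"},
       {"car works"}, set()),
    Op("tell shop problem",
       {"in communication with shop"},
       {"shop knows problem"}, set()),
    Op("telephone shop",
       {"know phone number"},
       {"in communication with shop"}, set()),
]

state = {"son at home", "car needs battery", "know phone number"}
print(gps(state, {"son at school"}, ops))  # -> True

The point to notice is how thin it is: the "problem solving strategy" is just a recursive search over hand-written operators, fully separate from the domain knowledge.
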
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


I think there’s some confusion here. A DSL is based on a grammar that’s used to transform DSL statements into a syntax tree. It’s then the developer’s task to implement deterministic rules in code for each node in such a syntax tree. AI, however, works fundamentally differently. The logic doesn’t reside in the code lines but emerges from patterns in data. This is why AI is such a breakthrough—it challenges the idea that intelligence is something transcendent or mysterious. Instead, it shows that what we think of as intelligence might simply be the ability to recognize patterns, make predictions, and adapt based on data.
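
To make the DSL half of that contrast concrete, here's a tiny hypothetical evaluator - every node type in the syntax tree gets a hand-written, deterministic rule (illustrative Python, not any particular DSL framework):

Code:
# A tiny DSL: expressions like "(ADD 2 (MUL 3 4))" parsed into a
# syntax tree, with one deterministic, developer-written rule per
# node type. Purely a sketch of the pipeline described above.

import operator

RULES = {"ADD": operator.add, "MUL": operator.mul}  # one rule per node

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Build a nested-list syntax tree from the token stream."""
    tok = tokens.pop(0)
    if tok == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # drop ")"
        return node
    return int(tok) if tok.isdigit() else tok

def evaluate(node):
    """Walk the tree; the hand-written RULES decide every node."""
    if isinstance(node, int):
        return node
    op, *args = node
    return RULES[op](*map(evaluate, args))

print(evaluate(parse(tokenize("(ADD 2 (MUL 3 4))"))))  # -> 14

Nothing here ever does anything the rule table doesn't spell out, which is exactly the contrast with a trained model.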

Much like how babies learn about the world through experience—observing patterns, experimenting, and refining their understanding—AI models learn from enormous datasets to generalize and solve problems. Whether it’s natural language, image recognition, or protein folding, AI demonstrates that complex problem-solving can arise from computational processes rather than any mystical property. This suggests that intelligence, far from being an ineffable quality, could simply be the result of sufficiently rich patterns and processing. In this context, I think @Laird makes a fair point.
(2025-01-09, 08:25 AM)sbu Wrote: I think there’s some confusion here. A DSL is based on a grammar that’s used to transform DSL statements into a syntax tree. It’s then the developer’s task to implement deterministic rules in code for each node in such a syntax tree. AI, however, works fundamentally differently. The logic doesn’t reside in the code lines but emerges from patterns in data. This is why AI is such a breakthrough—it challenges the idea that intelligence is something transcendent or mysterious. Instead, it shows that what we think of as intelligence might simply be the ability to recognize patterns, make predictions, and adapt based on data.

Much like how babies learn about the world through experience—observing patterns, experimenting, and refining their understanding—AI models learn from enormous datasets to generalize and solve problems. Whether it’s natural language, image recognition, or protein folding, AI demonstrates that complex problem-solving can arise from computational processes rather than any mystical property. This suggests that intelligence, far from being an ineffable quality, could simply be the result of sufficiently rich patterns and processing. In this context, I think @Laird makes a fair point.

I think you’re missing my point. It’s not that ASTs are being used; it’s that a lot of domain knowledge can be captured programmatically by using the right structures.

That’s why I said to combine that knowledge with the fact that AI companies have humans-in-the-loop working as wage-slaves around the world to aid AIs with their seeming semantic understanding.

Also you’re ignoring the important point @Valmar and I keep raising, which is that AI looks impressive but one has to actually look under the hood - such as the way output sampling is randomized via “temperature” - to see how the magic trick is actually done.
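
For anyone who hasn't looked under that hood: "temperature" just rescales the model's raw output scores before a weighted random draw over tokens. A minimal sketch with made-up numbers (real models do this over vocabularies of tens of thousands of tokens):

Code:
# Temperature sampling: scale the model's raw scores (logits),
# softmax them into probabilities, then take a weighted random draw.

import math, random

def sample_next_token(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs)[0]

logits = [2.0, 1.0, 0.5, -1.0]  # hypothetical scores for 4 tokens
# Low temperature -> nearly deterministic; high -> nearly uniform.
for t in (0.1, 1.0, 2.0):
    picks = [sample_next_token(logits, t) for _ in range(1000)]
    print(t, [picks.count(i) for i in range(4)])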

My contention is that the “black box” aspect of machine “learning” is all that separates current LLMs from the old claims about GPS.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2025-01-09, 10:44 AM)Sciborg_S_Patel Wrote: I think you’re missing my point. It’s not that ASTs are being used; it’s that a lot of domain knowledge can be captured programmatically by using the right structures.

That’s why I said to combine that knowledge with the fact that AI companies have humans-in-the-loop working as wage-slaves around the world to aid AIs with their seeming semantic understanding.

Also you’re ignoring the important point @Valmar and I keep raising, which is that AI looks impressive but one has to actually look under the hood - such as the way output sampling is randomized via “temperature” - to see how the magic trick is actually done.

My contention is that the “black box” aspect of machine “learning” is all that separates current LLMs from the old claims about GPS.

You’re judging a technology that only reached prominence around 2018, yet large language models (LLMs) have already demonstrated how much can be derived from data alone. While I still believe there’s an elusive element that underpins consciousness, much of what shapes our humanity seems to be encoded in the brain and is formed through early sensory input. This becomes evident in stroke patients who, after sustaining damage to specific brain regions, may lose cognitive abilities such as complex reasoning, recognition, or empathy—showing how central these data-driven processes are to our identity.
(2025-01-09, 12:01 PM)sbu Wrote: You’re judging a technology that only reached prominence around 2018, yet large language models (LLMs) have already demonstrated how much can be derived from data alone. While I still believe there’s an elusive element that underpins consciousness, much of what shapes our humanity seems to be encoded in the brain and is formed through early sensory input. This becomes evident in stroke patients who, after sustaining damage to specific brain regions, may lose cognitive abilities such as complex reasoning, recognition, or empathy—showing how central these data-driven processes are to our identity.

Seems like this is just Computationalism of the Gaps? Otherwise you could tell me how the input leads to the output in a machine learning program, and then I would be forced to concede that the execution trace shows something metaphysically significant is happening.

When I was younger there were times I was blown away by video game AI, specifically two flying imps in Diablo I who would circle me. When I went to attack one of them I'd have to swing my sword, and this gave the other imp the opportunity to strike me from behind. I recall even telling my friends about how amazing this was, but this wonder faded as I became more knowledgeable about how it worked under the hood.

Instead that wonder turned to the incredible ingenuity of human consciousness, something I became even more impressed with when I saw how the Universal Truths of Mathematics underpinned the foundations of all maths and thus all sciences.

Similarly, when I was older and had gotten my bachelor's in maths, I was amazed by theorem provers but then I learned more about how they worked.

Perhaps given my own experience with the magic trick of AI, it's just confusing to me that people can know how AI companies exploit workers as humans in the loop, understand the basics of machine learning, and see how complex domains of human knowledge can be structured programmatically, and then still assert that something metaphysically significant is going on that wasn't going on with any game AI or any Theorem Prover.

Games involve real-time decision making, proofs of theorems are a higher mathematical abstraction, yet once one understands how the AI works the seeming novelty is gone. LLMs, AFAICT, are like the GPS of old - just one more case of people being impressed by the output and then inferring something special is happening under the hood. My guess is that as we become better able to "unwind" the "black box" of machine learning and actually trace the program flow from input to output, the illusion will become less impressive.

In the meantime the basic reality of what a Turing Machine running a program actually is, and how it differs from our own embodied selves, is something even a materialist like Searle can see, as per his paper Is the Brain a Digital Computer? The Dualist (of sorts) Lanier also makes similar arguments in the excellent You Can't Argue with a Zombie.

As for the example of stroke patients, all we know about the brain - an object in experience - is that our experience of interacting with that icon is correlated with aspects of our general experience. There's nothing "encoded" in the brain, if by that you mean our thoughts about things are somehow intrinsically connected to some brain structure. See neuroscientist Erik Hoel's essay Neuroscience is pre-paradigmatic. Consciousness is Why.

Now all that said, I do want to reiterate that I do believe synthetic life will be possible one day, when we figure out the correct minimally necessary structures in our own embodied consciousness that allow for this (synthetic microtubules playing a role is my bet). But it won't help us decide much of anything about which "ism" of consciousness is the right one, because the AI question isn't a Materialism vs Non-Materialism one.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2025-01-09, 08:25 AM)sbu Wrote: I think there’s some confusion here. A DSL is based on a grammar that’s used to transform DSL statements into a syntax tree. It’s then the developer’s task to implement deterministic rules in code for each node in such a syntax tree. AI, however, works fundamentally differently. The logic doesn’t reside in the code lines but emerges from patterns in data.

Never mind that there is nothing to support this idea ~ it is also entirely illogical. The logic only resides in the code ~ logic cannot "emerge" from patterns of data. An "artificial intelligence" is just fundamentally an algorithm, and nothing more. Anything else is just marketing.

(2025-01-09, 08:25 AM)sbu Wrote: This is why AI is such a breakthrough—it challenges the idea that intelligence is something transcendent or mysterious. Instead, it shows that what we think of as intelligence might simply be the ability to recognize patterns, make predictions, and adapt based on data.

"Artificial intelligence" "challenges" nothing except a strawman concept of intelligence. Intelligence has nothing specifically to do with pattern recognition or making predictions or adapting based on data. Intelligence, rather, is the ability the apply knowledge and understanding from existing experience to new experiences in unique and clever ways ~ to creatively think outside of the box in terms of problem solving.

"Artificial intelligence" is defined by the box of the dataset, which is also the algorithm has to iterate over. That algorithm cannot "think" outside of that box, because nothing exists outside of the box, hence there is no creativeness nor intelligence.

(2025-01-09, 08:25 AM)sbu Wrote: Much like how babies learn about the world through experience—observing patterns, experimenting, and refining their understanding—AI models learn from enormous datasets to generalize and solve problems.

This is a rather severe false equivalence... AI models do not do anything remotely similar to babies, not even in the poorest of metaphorical senses.

(2025-01-09, 08:25 AM)sbu Wrote: Whether it’s natural language, image recognition, or protein folding, AI demonstrates that complex problem-solving can arise from computational processes rather than any mystical property.

Computation itself is not "problem solving", nor does problem solving arise from it. Computation is simply a blind, abstract process built on physical processes that extremely talented human engineers developed for the purpose of doing mathematics more quickly. That then evolved into something more through more and more abstractions piled on top.

But at a physical level, there is still nothing but electrons whizzing through circuits ~ and the circuits themselves are abstractions, even!

(2025-01-09, 08:25 AM)sbu Wrote: This suggests that intelligence, far from being an ineffable quality, could simply be the result of sufficiently rich patterns and processing. In this context, I think @Laird makes a fair point.

But there is no evidence to suggest it, apart from fanciful imagination.
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


(2025-01-09, 12:01 PM)sbu Wrote: You’re judging a technology that only reached prominence around 2018, yet large language models (LLMs) have already demonstrated how much can be derived from data alone.

Again, you are not looking behind the curtain... you are simply judging the book by its cover.

LLMs are only as good as they are because they have absolutely gargantuan datasets, are trained by algorithms crafted by extremely clever programmers, and run on extremely powerful hardware. The data centers these LLMs run in consume massive amounts of power and produce astonishing amounts of heat.

(2025-01-09, 12:01 PM)sbu Wrote: While I still believe there’s an elusive element that underpins consciousness, much of what shapes our humanity seems to be encoded in the brain and is formed through early sensory input. This becomes evident in stroke patients who, after sustaining damage to specific brain regions, may lose cognitive abilities such as complex reasoning, recognition, or empathy—showing how central these data-driven processes are to our identity.

You are simply confusing metaphor for reality. The brain is not a computer ~ not in the simplest or most complex of metaphors.
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


(2025-01-09, 12:31 PM)Sciborg_S_Patel Wrote: As for the example of stroke patients, all we know about the brain - an object in experience - is that our experience of interacting with that icon is correlated with aspects of our general experience. There's nothing "encoded" in the brain, if by that you mean our thoughts about things are somehow intrinsically connected to some brain structure. See neuroscientist Erik Hoel's essay Neuroscience is pre-paradigmatic. Consciousness is Why.

Now all that said, I do want to reiterate that I do believe synthetic life will be possible one day, when we figure out the correct minimally necessary structures in our own embodied consciousness that allow for this (synthetic microtubules playing a role is my bet). But it won't help us decide much of anything about which "ism" of consciousness is the right one, because the AI question isn't a Materialism vs Non-Materialism one.

I didn't advocate for computationalism, as I think there's still a mysterious component missing, just as Wilder Penfield concluded in his brain stimulation studies. But he could, for example, stimulate memories, and as we know, memories can also be lost through damage to the brain. So yes, I very much believe there's a lot encoded in the brain that's lost on damage and death. We also know a stroke can cause people to change behaviour, etc. - so I don't think the mysterious component is related to our individual identity.
(2025-01-09, 12:51 PM)sbu Wrote: I didn't advocate for computationalism, as I think there's still a mysterious component missing, just as Wilder Penfield concluded in his brain stimulation studies. But he could, for example, stimulate memories, and as we know, memories can also be lost through damage to the brain. So yes, I very much believe there's a lot encoded in the brain that's lost on damage and death. We also know a stroke can cause people to change behaviour, etc. - so I don't think the mysterious component is related to our individual identity.

You need a lot to back up the claim of "stimulating" memories.

Memories have never been demonstrated to be actually "lost", so much as simply forgotten. Case-in-point ~ terminal lucidity, where despite extremely advanced dementia or Alzheimer's, the patient has a sudden perfect recall of their memories and personality shortly before death.

So damage to the brain does not actually cause "loss", but something else.

Same thing with strokes ~ they do not destroy memories. They simply damage recall of certain memories.

Brain damage overall is best understood in terms of filter theory ~ you damage the filter, you're not damaging consciousness or the contents itself, just the expression of consciousness through the brain filter.

So ~ there is no "mysterious component", no missing link. There is simply the brain ~ which is precisely as it appears to be ~ and the mind, which is never observed beyond the effects it has on the brain. Because neuroscience cannot see the mind, only its effects, neuroscientists conclude that the mind doesn't exist, and that the brain itself is responsible. A mistake of logic.
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung


(2025-01-09, 12:51 PM)sbu Wrote: I didn't advocate for computationalism, as I think there's still a mysterious component missing, just as Wilder Penfield concluded in his brain stimulation studies. But he could, for example, stimulate memories, and as we know, memories can also be lost through damage to the brain. So yes, I very much believe there's a lot encoded in the brain that's lost on damage and death.

Terminal lucidity suggests brain damage prevents the embodied mind's access to its faculties, not that the faculties are lost for all time.

For a brain to encode memories it seems to me there would have to be some matter that somehow, by its arrangement, intrinsically is about what is contained in the memory. But why would that be the case when we can project an infinite number of meanings to any structure?

If the key is the structural arrangement of the matter, which has no intrinsic mental character, then this seems more like Platonism or Dualist Parallelism than Materialism to me. I guess another option would be something akin to what Hoffman & Arvan claim, that Irreducible Consciousness is already around but is localized by structure.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


