Another demonstration of ChatGPT 4.0 capabilities


(2024-09-29, 07:14 AM)Laird Wrote: What do you mean by "truly self-learning" and why do you think it remains out of reach for now? When and why will that change?

Artificial General Intelligence (AGI) is often considered a vague concept, primarily because we lack a clear definition of human intelligence. Without understanding what human intelligence truly entails, how can we hope to define AGI?

Current AI models learn from data by minimizing an error or loss function, which is carefully defined by humans. In contrast, a newborn human learns through experience. A newborn "knows" very little, aside from basic impulses like feeding. However, through curiosity—another concept we have yet to fully define—humans begin exploring the world using their five senses. Almost magically, around the age of two, a child begins to recognize their own reflection in a mirror and starts to realize they are independent beings, separate from their parents. Some people even remember this profound moment of self-awareness.
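To make "minimizing an error or loss function" concrete, here's a deliberately tiny sketch: one parameter nudged downhill by gradient descent on a human-chosen mean-squared error. The data and learning rate are invented for illustration; real models do this with billions of parameters, but the principle is the same.

```python
import random

random.seed(0)

# 100 noisy samples of the "world": y = 3x plus noise
data = []
for _ in range(100):
    x = random.gauss(0, 1)
    data.append((x, 3.0 * x + random.gauss(0, 0.1)))

w = 0.0   # the model: a single parameter
lr = 0.1  # learning rate, chosen by a human

for _ in range(200):
    # gradient of the human-specified loss, mean((w*x - y)^2), w.r.t. w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

# w ends up close to the true slope of 3.0
```

Everything the "model" ends up knowing is dictated by the loss a person wrote down; there is no curiosity-driven exploration anywhere in that loop.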

AI systems, on the other hand, lack the ability to independently explore new ideas or concepts beyond their training data and parameters. They don't spontaneously generate hypotheses or conduct experiments in their environment like humans do. Furthermore, AIs suffer from catastrophic forgetting, a phenomenon where learning new data can cause previously learned information to be lost. In humans, however, learning new knowledge does not erase old memories or understanding.
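Catastrophic forgetting is easy to demonstrate in miniature. In this toy sketch (targets and step counts are invented), a single parameter is first fitted to "task A", then to "task B"; fitting B completely overwrites what was learned for A, because both tasks compete for the same weights.

```python
def train(w, target, steps=100, lr=0.1):
    """Gradient descent on the loss (w - target)^2."""
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

w = train(0.0, 3.0)              # learn task A: w ends near 3
loss_a_before = (w - 3.0) ** 2   # essentially zero

w = train(w, -2.0)               # now learn task B: w ends near -2
loss_a_after = (w - 3.0) ** 2    # task A performance is destroyed
```

A human who learns algebra and then French does not lose the algebra; a network trained this way, without special countermeasures, does.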

The new ChatGPT model operates using a method called reinforcement learning, which is akin to iterative trial and error—similar to how we approach problem-solving in mathematics. Yet, the outcome in AI is driven by the optimization of a human-specified reward function. This approach, while effective, appears to limit the generality of the model. In fact, there are certain tasks that earlier versions of ChatGPT can perform better than the newer versions, likely due to these limitations in generalization.
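To illustrate the general flavour of reward-driven trial and error (this is an epsilon-greedy two-armed bandit, a textbook toy, not how OpenAI actually trains its models; the arm names and payouts are invented), note how the outcome is entirely determined by the reward table a human wrote down:

```python
import random

random.seed(1)

reward = {"a": 1.0, "b": 0.2}  # the human-specified reward function
values = {"a": 0.0, "b": 0.0}  # the agent's running value estimates
counts = {"a": 0, "b": 0}

for _ in range(500):
    # trial and error: explore 10% of the time, otherwise exploit
    if random.random() < 0.1:
        arm = random.choice(["a", "b"])
    else:
        arm = max(values, key=values.get)
    counts[arm] += 1
    # incremental average of observed rewards for this arm
    values[arm] += (reward[arm] - values[arm]) / counts[arm]

best = max(values, key=values.get)  # the agent "prefers" whatever pays most
```

Change the numbers in `reward` and the "learned" behaviour changes with them, which is exactly the generality limitation described above.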

I’m uncertain whether AGI is truly achievable, but it is clear that the current strategies for AI are far from reaching AGI, especially given that we don't even have a clear definition of what AGI really means.
(This post was last modified: 2024-09-29, 07:52 PM by sbu. Edited 1 time in total.)
[-] The following 3 users Like sbu's post:
  • Larry, Laird, nbtruthman
Thanks for the elaboration. You're right that "intelligence" is hard to define, and that "artificial" general intelligence is thus a murky goal, and I agree that the (true) intelligence of living beings including humans is inseparable from consciousness, including traits like curiosity, experience, and spontaneity. I'm not as up to date on current AI models as you are, so it seems I've overestimated how close they are to achieving the - however murkily defined - goal of AGI.
(2024-09-30, 06:19 AM)Laird Wrote: Thanks for the elaboration. You're right that "intelligence" is hard to define, and that "artificial" general intelligence is thus a murky goal, and I agree that the (true) intelligence of living beings including humans is inseparable from consciousness, including traits like curiosity, experience, and spontaneity. I'm not as up to date on current AI models as you are, so it seems I've overestimated how close they are to achieving the - however murkily defined - goal of AGI.

Update: This week, I’ve wasted at least 10 working hours on the new 'ChatGPT-01' wonder! I can’t say it enough times – it can’t 'think'. Don’t believe in the hype.
[-] The following 2 users Like sbu's post:
  • Laird, Larry
(2024-10-01, 03:11 PM)sbu Wrote: Update: This week, I’ve wasted at least 10 working hours on the new 'ChatGPT-01' wonder! I can’t say it enough times – it can’t 'think'. Don’t believe in the hype.

Is there an alternative word (or simple phrase) for "think" that you'd use to describe what it can do?
(2024-10-02, 05:27 AM)Laird Wrote: Is there an alternative word (or simple phrase) for "think" that you'd use to describe what it can do?

Instead of "AI", with its emphasis on the "I", it would probably be better just to refer to it as an LLM (Large Language Model), which it certainly is.

Note that, regarding its mathematical proficiency, tools like Mathematica and Maple have been able to do the same for years.
(This post was last modified: 2024-10-02, 06:51 AM by sbu. Edited 1 time in total.)
[-] The following 2 users Like sbu's post:
  • Typoz, Laird
An interesting but rather technical video that explains the challenges AI faces in reducing errors ("hallucinations").

This will have to be overcome before AI reaches the point sbu spoke about of being able to replace knowledge workers. For functions that require near or actual error-free decision making, the current tech doesn't look like it can get there.
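One way to see why error-free decision making is so hard for these systems: if each step of a multi-step task goes wrong with even a small probability p, the chance the whole chain comes out clean is (1 - p)^n, which decays quickly. The 2% per-step error rate below is an assumed number purely for illustration.

```python
p = 0.02  # assumed per-step error rate (illustrative only)

for n in (1, 10, 50, 100):
    # probability that an n-step chain contains no errors
    print(n, round((1 - p) ** n, 3))
```

At 50 steps the chain is already more likely to contain an error than not, which is why "almost always right per step" is not good enough for knowledge work that chains many steps together.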

https://youtu.be/5eqRuVp65eY?si=uMBX0oWsfoqxvj6f
I'd like to know whether all the publicly available AIs are built using data containing chunks of snapshots of the internet. If not, what other datasets are used for this purpose?

Snapshots of the internet would seem to be inherently flawed even if they have been cleansed in some way, and quite irrespective of neural scaling laws!

David
(2024-10-04, 03:45 PM)David001 Wrote: I'd like to know whether all the publicly available AIs are built using data containing chunks of snapshots of the internet. If not, what other datasets are used for this purpose?

Snapshots of the internet would seem to be inherently flawed even if they have been cleansed in some way, and quite irrespective of neural scaling laws!

David

The data sources, and particularly the weights assigned to each, are likely among OpenAI’s most closely guarded assets, as the differences in neural network architecture compared to competitors are probably minimal. One thing is certain: it’s not based on simple internet scraping with uniform weighting assigned to every data source.
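A sketch of what non-uniform source weighting might look like in the abstract. The source names and weights here are entirely invented for illustration; the actual mixtures are exactly the guarded secret described above.

```python
import random

random.seed(0)

# Hypothetical training mixture: hand-picked, non-uniform weights,
# rather than a flat scrape where every page counts equally.
sources = ["curated_books", "code", "web_crawl", "reference"]
weights = [0.35, 0.25, 0.25, 0.15]  # invented numbers, not OpenAI's

# Draw a large "training batch" according to those weights
batch = random.choices(sources, weights=weights, k=10_000)
share = {s: batch.count(s) / len(batch) for s in sources}
```

The empirical shares land close to the chosen weights, so a curated mixture like this lets a lab dial up trusted material and dial down noisy scraped text without touching the architecture at all.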
[-] The following 1 user Likes sbu's post:
  • Silence
(2024-10-05, 10:33 AM)sbu Wrote: The data sources, and particularly the weights assigned to each, are likely among OpenAI’s most closely guarded assets, as the differences in neural network architecture compared to competitors are probably minimal. One thing is certain: it’s not based on simple internet scraping with uniform weighting assigned to every data source.

However, I think it is based on a snapshot of the internet. ChatGPT-3 actually tells you this in some situations. I also remember that one of these systems had to be fixed because people found they could access a lot of pornographic 'information' from the internet.

David
(2024-10-05, 03:59 PM)David001 Wrote: However, I think it is based on a snapshot of the internet. ChatGPT-3 actually tells you this in some situations. I also remember that one of these systems had to be fixed because people found they could access a lot of pornographic 'information' from the internet.

David

They are not able to self-evolve, if that's what you mean. After training, the model is static in time. You can ask it to access the Internet, but that's equivalent to you copying and pasting the corresponding text and images into the chat.
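The copy-and-paste equivalence can be sketched like this. All names here (`FROZEN_WEIGHTS`, `answer`, `toy_model`) are hypothetical stand-ins: the point is that retrieval only changes the input string, while the trained weights are read, never written.

```python
# Stand-in for parameters fixed at training time; never modified below
FROZEN_WEIGHTS = {"trained": "2023 snapshot"}

def toy_model(prompt):
    # Placeholder for the real network: it only *reads* the weights
    return f"[{len(prompt)} chars seen, weights={FROZEN_WEIGHTS['trained']}]"

def answer(question, model, retrieved_pages=()):
    # "Accessing the internet" = pasting retrieved text into the context
    context = "\n".join(retrieved_pages)
    prompt = f"{context}\n\nQ: {question}"
    return model(prompt)

out = answer("What happened today?", toy_model,
             retrieved_pages=["Today's headline ..."])
```

Whether or not pages are retrieved, `FROZEN_WEIGHTS` is identical before and after the call; nothing the model "sees" at chat time changes what it knows.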
(This post was last modified: 2024-10-05, 07:43 PM by sbu. Edited 4 times in total.)
