(2024-12-20, 07:40 PM)Sciborg_S_Patel Wrote: I'd say the keyword is "simulated".
I don't believe LLMs understand anything semantically.
LLMs have never once been shown to comprehend anything semantically. Never mind that the very computational nature of LLMs excludes any possibility of semantic reasoning: it's abstractions and metaphors all the way down to the purely physical and chemical level...
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung
I agree and have nothing to add except that it's very puzzling that LLMs are capable of making novel and insightful statements of meaning without understanding what they're saying.
(Yesterday, 01:51 AM)Laird Wrote: I agree and have nothing to add except that it's very puzzling that LLMs are capable of making novel and insightful statements of meaning without understanding what they're saying.
Well, someone else made the original insightful statement, and then the AI companies stole it.
The AI is good at rewording stolen, possibly copyrighted, information, though.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'
- Bertrand Russell
(Yesterday, 01:51 AM)Laird Wrote: I agree and have nothing to add except that it's very puzzling that LLMs are capable of making novel and insightful statements of meaning without understanding what they're saying.
It's not puzzling when you understand that LLMs are just algorithms predicting the next token from statistical patterns learned over their training data, with a hint of randomness added at decoding time to give the illusion that there's no algorithm happening.
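For context on that "hint of randomness": in standard LLM decoding it is typically temperature sampling over the model's next-token scores. Here is a minimal sketch in Python with NumPy; the function name and the toy scores are my own illustration, not any particular model's API:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a next-token id from temperature-scaled scores.

    As temperature -> 0 this approaches greedy (fully deterministic)
    decoding; higher temperatures flatten the distribution.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    scaled -= scaled.max()                         # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return int(rng.choice(len(probs), p=probs))

# Toy "vocabulary" of four tokens with fixed, deterministic scores.
logits = [2.0, 1.0, 0.5, 0.1]
print(sample_next_token(logits, temperature=0.7))   # varies run to run
print(sample_next_token(logits, temperature=1e-8))  # almost always token 0
```

With the temperature near zero, the same scores yield the same token every time, which is the sense in which the apparent spontaneity is a veneer over a deterministic computation.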
“Everything that irritates us about others can lead us to an understanding of ourselves.”
~ Carl Jung