(2024-11-05, 11:19 PM)nbtruthman Wrote: Michael Tymn recently conducted a very interesting experiment with a generative AI system. He conducted an "interview" where he asked the AI system a series of very important questions regarding the evidence for survival of death, and how it would explain some notable relevant experiments in this area conducted by a famous psychical researcher in the past. To answer these questions well and thoroughly seemingly requires a lot of intelligent unbiased thinking, the ability to generalize, and the ability to filter out all the strident voices of scientism proclaiming the supposed truth of materialism and the impossibility of an afterlife.
http://whitecrowbooks.com/michaeltymn/en...fter_death
Frankly, I was impressed by the quality of a number of the answers. They generally made reasonable sense, seemed knowledgeable, and, amazingly, were quite even-handed and unbiased in discussing a matter so steeped in controversy. It was hard to believe this material was generated by a complex non-thinking "thing", through computer Internet data searches and the execution of complicated algorithms. There was a strong impression of communicating with a rational, intelligent agent, which I guess only shows how possible (though computationally difficult) it is to fool us. The apparently mostly unbiased, even-handed evaluations of the generative AI system are hard to explain, especially given that the Internet data it was utilizing contains so much negative and very biased material on the possibility of survival. Wikipedia, for instance.
The answers did become somewhat repetitive, for instance repeating comments to the effect that scientific acceptance requires repeated demonstration on demand. The AI did correctly note that this is impossible for the paranormal phenomena indicative of survival. Generally, most of the answers pointed out that survival and an afterlife have a certain body of evidence from paranormal phenomena such as NDEs and reincarnation, but that this evidence is mostly considered anecdotal or unscientific. It never stated that this negative opinion ignores much of the data and its quality; instead, the AI repeated the comment that the subject is controversial and the data much questioned by science.
The AI system never stated (as I think it would be expected to state) the general prevailing scientistic conviction and "party line" that the subject matter is wish-fulfilling superstition and imagination. You would think that this last answer would be automatically gleaned by the AI from the very extensive skeptical and closed-minded material on survival and an afterlife on the Internet, where paranormal proponents are a small minority.
It occurs to me that it is almost as if the creators of this system deliberately set up pragmatic rules for the AI that it would answer questions on controversial subjects in such a way as not to take a stand one way or the other. So as not to upset too many people?
Also, one question revealed the dreaded AI "hallucination" phenomenon: a question deliberately citing a nonexistent past experiment was accepted as true, followed by commentary on that nonexistent experiment.
Notably, toward the end, in its conversational manner the AI even asked Tymn for his own opinions on the subject.
Here are three sample questions and answers from the extensive "interview":
From the first sample response which you quoted:
Quote:"However, these experiences can often be explained through neurological and psychological factors, such as brain activity during trauma or altered states of consciousness."
That's clearly inaccurate at best or simply false. Certainly attempts are made to dismiss evidence using those sorts of arguments, but that is not the same as being able to explain them. There are gaps in the logical trail leading from an assertion to concluding that such an assertion is a valid and appropriate explanation.
It seems to me to be regurgitating sceptical opinion there, rather than being at all even-handed.
(2024-11-06, 10:07 AM)Typoz Wrote: From the first sample response which you quoted:
That's clearly inaccurate at best or simply false. Certainly attempts are made to dismiss evidence using those sorts of arguments, but that is not the same as being able to explain them. There are gaps in the logical trail leading from an assertion to concluding that such an assertion is a valid and appropriate explanation.
It seems to me to be regurgitating sceptical opinion there, rather than being at all even-handed.
Yes, this is a somewhat biased, skeptical response, and in unbiased reality only a small minority of NDEs are normally explainable. However, reading this AI's skeptical response, reasonably and logically it deliberately leaves out the significant number of veridical NDEs which do leave strong evidence that the NDEr actually experienced an OBE: what could be termed a mobile center of consciousness separated from the brain and body and made detailed veridical observations, later confirmed by independent investigators, either of the NDEr's own body and the resuscitation personnel working on it while the brain was dysfunctional due to the trauma, or occasionally of deceased loved ones not known at the time to be dead. And the very fact that the NDEr experienced realer-than-real enhanced consciousness during the period when the brain was dysfunctional is strong evidence that the mind is not generated by the physical brain, as claimed by materialistic neurology.
Of course we could interpret the response as a materialistic "promissory note" implying and promising that of course eventually conventional explanations will be found for all NDEs, but I don't think this material reads that way.
(This post was last modified: 2024-11-06, 08:53 PM by nbtruthman. Edited 2 times in total.)
(2024-10-14, 02:41 PM)Typoz Wrote: There was a twitter thread asking the question,
"Can Large Language Models (LLMs) truly reason?"
That link led me to only a single tweet - I couldn't work out how to view the full thread. Clearly, I spend very little time on Twitter.
(2024-10-14, 02:41 PM)Typoz Wrote: which discussed this paper:
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
I gave it a read. It's very interesting, and it's got me reevaluating, once again, the idea I was prompted to reevaluate here on learning of the strong reasoning performance of ChatGPT o1: that neural networks are insufficient to reach artificial general intelligence and that additional ad hoc modules are required.
(2024-11-06, 04:02 PM)nbtruthman Wrote: reasonably and logically it deliberately leaves out the significant number of veridical NDEs which do leave strong evidence
Is this what you mean to write, NB? I ask because it doesn't seem reasonable and logical to me to leave out strong evidence.
AI hallucinations caused artificial intelligence to falsely describe these people as criminals
By Anna Kelsey-Sugg and Damien Carrick for ABC News on 4 November, 2024.
Quote:In the US, a similar action is currently proceeding.
It involves a US radio host, Mark Walters, who ChatGPT incorrectly claimed was being sued by a former workplace for embezzlement and fraud. Walters is now suing OpenAI in response.
"He was not involved in the case … in any way," says Simon Thorne, a senior lecturer in computer science at Cardiff School of Technologies, who has been following the embezzlement case.
Mr Walters' legal case is now up and running, and Dr Thorne is very interested to see how it plays out — and what liability OpenAI is found to have.
The legal implications of LLMs continue to unfold.
"But what lies in the artificiality of machine-made art? The machine takes the visible elements that compose an image and can articulate them into "new" images that are familiar (but oddly strange in expression) and complement the fast need for gratification. There is an empty place at the core of it that soon gets filled with residuals, like commercial interest, that more and more people mistake for beauty. Essentially, AI artwork communicates the soullessness of the digital world when it is mistaken for a creation, in the context of mass production. Production is not creation, and a product is not an artefact. If that were not so, factories would have created masterpieces in the industrial age, as much as AI does nowadays."
-Subotai Ulagh
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'
- Bertrand Russell
ChatGPT, Meta and Google generative AI should be designated 'high-risk' under new laws, bipartisan committee recommends
By Jake Evans for ABC News on 26 November, 2024.
Quote:The committee also determined multinational tech companies operating in Australia had committed "unprecedented theft" from creative workers.
It said developers of AI products should be forced to be transparent about the use of copyrighted works in their training datasets, and that the use of that work be appropriately licensed and paid for.
Quote:"If the widespread theft of tens of thousands of Australians' creative works by big multinational tech companies, without authorisation or remuneration, is not already unlawful, then it should be."
It said the notion put forward by Google, Amazon and Meta that their "theft" of Australian content was for the greater good because it ensured Australian culture was represented in AI output was "farcical".
It'll be interesting to see what, if anything, comes of this senate committee's recommendations.