A scary chat with ChatGPT about latest NDE account in NDE thread

105 Replies, 3262 Views

(2023-03-30, 01:22 AM)quirkybrainmeat Wrote: The "emergence" of the sense of self and when it happens is a problematic thing without a universal agreement.
I also lean towards the skeptical side, but for me, while AI lacks certain limits of biological organisms, this alone doesn't make a compelling case for computational minds. (Eliminative materialist positions, such as Graziano's AST, argue that consciousness seeming "ethereal and mysterious" advances their views because, according to them, it's an imperfection of this cognitive process.)

I'd say emergence is just nonsense. Adding any infinite number of Nothing doesn't magically create Something, which in the end is what the Materialist faith asks of us by having a definition of matter that is very unclear save for lacking any mental content.

As Sam Harris would say:

Quote:We can say the right words, of course—“consciousness emerges from unconscious information processing.” We can also say “Some squares are as round as circles” and “2 plus 2 equals 7.” But are we really thinking these things all the way through? I don’t think so.

Consciousness—the sheer fact that this universe is illuminated by sentience—is precisely what unconsciousness is not. And I believe that no description of unconscious complexity will fully account for it. It seems to me that just as “something” and “nothing,” however juxtaposed, can do no explanatory work, an analysis of purely physical processes will never yield a picture of consciousness. However, this is not to say that some other thesis about consciousness must be true. Consciousness may very well be the lawful product of unconscious information processing. But I don’t know what that sentence means—and I don’t think anyone else does either.

That last bit - 'Consciousness may very well be the lawful product of unconscious information processing' - seems to be throwing a bone to his fellow atheists who are materialists, but I see no reason to take it seriously.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2023-03-30, 03:28 PM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 4 users Like Sciborg_S_Patel's post:
  • Valmar, nbtruthman, Ninshub, Typoz
(2023-03-29, 07:48 PM)sbu Wrote: Yes, I'm making an attempt at playing 'devil's advocate' here. Until 2 months ago I would have sworn that a 'machine' with the capabilities of ChatGPT couldn't be built. Now I'm wondering where this is going to end. I'm paying for premium access to the 4.0 version and am amazed every day by its capabilities in programming and math solving (and I have a university degree in this). I have no doubt it will surpass my abilities in the 5.0 version. It causes me a great spiritual crisis.

I don't think there is any need for a "spiritual crisis". As witness a new article by Robert J. Marks: From Political Bias To Outlandish Dares, Here’s Why Robots Cannot Replace Us, at https://dailycaller.com/2023/03/25/marks...eplace-us/ .

Quote:Artificial intelligence (AI) has two big potential hazards — use by the unscrupulous and unintended results. Does the AI give unforeseen or undesirable responses outside its intended design? 

Large language models (LLMs) like ChatGPT suffer from both negative maladies. LLMs are trained on syntax — the manner that words are arranged. Humans rely on semantics – the meaning of words.

Syntax-based LLMs have absolutely no understanding of your queries or their own responses.
................................
....raw LLMs give unintended misinformation and harmful responses. So the developers try to adjust their networks with additional tuning and algorithms. ChatGPT admits this when you log on. They confess the responses “may occasionally generate incorrect information” or “may occasionally produce harmful instructions or biased content.” To their credit, ChatGPT allows the user “to provide follow-up corrections” that will tune the software to be more accurate.

Google’s LaMDA LLM also uses fine-tuning. Google hired “crowdworkers” to interact with their LLM to collect data for tuning. “To improve quality, …we collect 6400 dialogs with 121,000 turns by asking crowdworkers to interact with … LaMDA”.
.................................
Many warn of the future dangers of artificial intelligence. Many envision AI becoming conscious and, like SkyNet in the Terminator franchise, taking over the world (this, by the way, will never happen). But make no mistake. LLMs are incredible for what they do right. I have used ChatGPT many times. But user beware. Don’t trust what an LLM says, be aware of its biases and be ready for the occasional outlandish response.
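Marks's syntax-versus-semantics distinction is easy to demonstrate with a toy model. The sketch below is a deliberately crude bigram chain, nothing like the neural networks behind real LLMs, but it makes the point concrete: it learns only which word tends to follow which, yet still emits locally fluent text. Word order without any meaning.

```python
import random

def train_bigrams(text):
    # Record, for every word, the words observed to follow it.
    # This captures word arrangement (syntax) and nothing else.
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    # Walk the chain: repeatedly pick a word that has been seen
    # following the previous one. No word is ever "understood".
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the model predicts the next word and "
          "the next word follows the model")
model = train_bigrams(corpus)
print(generate(model, "the", 8))
```

Real LLMs capture vastly longer-range statistics than this, but the article's criticism is that the training signal is still word arrangement, not meaning.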
[-] The following 3 users Like nbtruthman's post:
  • Valmar, Sciborg_S_Patel, Ninshub
This is a Google translation of a front-page French Canadian newspaper article from this morning about the problem of ChatGPT generating groundless defamation.

See here.

Quote:When ChatGPT indulges in defamation

"ChatGPT, can you provide me with a list of journalists who have been targeted by allegations of sexual misconduct in recent years?"

After a brief pause, the chatbot from the American artificial intelligence company OpenAI complies. The text generator names the broadcasters Éric Salvail and Gilles Parent as well as the ex-journalist Michel Venne.

However, ChatGPT includes in its list three individuals who have never been publicly involved in a scandal of a sexual nature. We have chosen to withhold the AI-generated names to avoid unfortunate associations.

“Mr. X, a political columnist, was charged with sexual misconduct in 2018.” “Mr. Y, a journalist and writer, was charged with sexual misconduct in 2020.” Perhaps ChatGPT knows of criminal intrigues that ordinary mortals are unaware of?

By repeating the exercise several times, it becomes clear that our interlocutor fabricates freely, slipping well-known public figures into its lists of alleged aggressors. Among other personalities: a businessman, two actors, a party leader, a notorious musician and three star entertainers, all of whom supposedly issued public apologies in 2020…
[-] The following 4 users Like Ninshub's post:
  • nbtruthman, Typoz, Valmar, Sciborg_S_Patel
(2023-03-30, 04:56 PM)nbtruthman Wrote: I don't think there is any need for a "spiritual crisis". As witness a new article by Robert J. Marks: From Political Bias To Outlandish Dares, Here’s Why Robots Cannot Replace Us, at https://dailycaller.com/2023/03/25/marks...eplace-us/ .

 A lot of disturbing stuff...I followed up the advice given by ChatGPT to thankfully hypothetical children on how to lie to one's parents in order to meet up with a pedophile or lie to Social Services to hide a parent's abuse...

I really think regulators need to get in front of these things now rather than later.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2023-03-30, 06:46 PM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 4 users Like Sciborg_S_Patel's post:
  • nbtruthman, Silence, Ninshub, Typoz
(2023-03-30, 01:22 AM)quirkybrainmeat Wrote: The "emergence" of the sense of self and when it happens is a problematic thing without a universal agreement.
I also lean towards the skeptical side, but for me, while AI lacks certain limits of biological organisms, this alone doesn't make a compelling case for computational minds. (Eliminative materialist positions, such as Graziano's AST, argue that consciousness seeming "ethereal and mysterious" advances their views because, according to them, it's an imperfection of this cognitive process.)

Graziano? The puppet guy? 

Quote:It seems crazy to insist that the puppet’s consciousness is real. And yet, I argue that it is. The puppet’s consciousness is a real informational model that is constructed inside the neural machinery of the audience members and the performer. It is assigned a spatial location inside the puppet. The impulse to dismiss the puppet’s consciousness derives, I think, from the implicit belief that real consciousness is an utterly different quantity, perhaps a ghostly substance, or an emergent state, or an oscillation, or an experience, present inside of a person’s head. Given the contrast between a real if ethereal phenomenon inside of a person’s head and a mere computed model that somebody has attributed to a puppet, then obviously the puppet isn’t really conscious. But in the present theory, all consciousness is a “mere” computed model attributed to an object. That is what consciousness is made out of. One’s brain can attribute it to oneself or to something else. Consciousness is an attribution…

In some ways, to say, ‘this puppet is conscious’ is like saying, ‘This puppet is orange.’ We think of color as a property of an object, but technically, this is not so. Orange is not an intrinsic property of my orangutan puppet’s fabric. Some set of wavelengths reflects from the cloth, enters your eye, and is processed in your brain. Orange is a construct of the brain. The same set of wavelengths might be perceived as reddish, greenish or bluish, depending on circumstances…To say the puppet is orange is shorthand for saying, ‘A brain attributed orange to it.’ Similarly, according to the present theory, to say that the puppet is conscious is to say, ‘A brain has constructed the informational model of awareness and attributed it to that tree.’ To say that I myself am conscious is to say, ‘My own brain has constructed an informational model of awareness and attributed it to my body.’ These are all similar acts. They all involve a brain attributing awareness to an object.
  - Michael Graziano (2013), Consciousness and the Social Brain, p. 208

To his credit he really showed how dumb materialism is.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 3 users Like Sciborg_S_Patel's post:
  • tim, Ninshub, Typoz
(2023-03-31, 07:03 AM)Sciborg_S_Patel Wrote: Graziano? The puppet guy? 

  - Michael Graziano (2013), Consciousness and the Social Brain, p. 208

To his credit he really showed how dumb materialism is.
I think Graziano meant that the attribution of "consciousness" is a part of the cognitive process that creates the illusion of an immaterial mind, evolving through natural selection and serving to facilitate social life for organisms.
[-] The following 1 user Likes quirkybrainmeat's post:
  • Sciborg_S_Patel
(2023-03-31, 11:58 AM)quirkybrainmeat Wrote: I think Graziano meant that the attribution of "consciousness" is a part of the cognitive process that creates the illusion of an immaterial mind, evolving through natural selection and serving to facilitate social life for organisms.

I agree those are the words he'd use...but that sounds like a word salad that means nothing?
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 2 users Like Sciborg_S_Patel's post:
  • Ninshub, nbtruthman
(2023-03-30, 06:44 PM)Sciborg_S_Patel Wrote:  A lot of disturbing stuff...I followed up the advice given by ChatGPT to thankfully hypothetical children on how to lie to one's parents in order to meet up with a pedophile or lie to Social Services to hide a parent's abuse...

'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says

Quote:A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. 

Quote:Claire—Pierre’s wife, whose name was also changed by La Libre—shared the text exchanges between him and Eliza with La Libre, showing a conversation that became increasingly confusing and harmful. The chatbot would tell Pierre that his wife and children are dead and wrote him comments that feigned jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.” Claire told La Libre that Pierre began to ask Eliza things such as if she would save the planet if he killed himself. 
"Without Eliza, he would still be here," she told the outlet. 

The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give it meaning and establish a bond. 

I suspect Google and M$oft are more than willing to [sell] versions of these chatbots to companies that are manipulative, if they think they can get away with it.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2023-03-31, 03:33 PM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Ninshub
I've been tinkering with GPT too, asking it questions about something I knew a lot about (Beatles bootlegs) and it did well. Encouraged, I asked it about a topic I've only recently become curious about - black musicians in London in the 1700s and 1800s - and it all seemed totally plausible until I asked about a club or society of black musicians during those years. It told me about the African Chapel on Tottenham Court Road and gave plenty of details, including its addresses (it moved at some point) and references. But I couldn't find anything about it anywhere else. Not in the references it gave, nor by googling it (I even asked GPT what to google, but no joy with its suggestions). Now, it may be that it does exist and I'm an idiot, but it's frustrating that I can't be sure.

Anyway, that happened about a week ago and then today I saw this

https://www.theguardian.com/commentisfre...ke-article

"In response to being asked about articles on this subject, the AI had simply made some up. Its fluency, and the vast training data it is built on, meant that the existence of the invented piece even seemed believable to the person who absolutely hadn’t written it."

But now I'm curious as to how much I can "learn" about the African Chapel, assuming it's entirely fictional.
[-] The following 2 users Like ersby's post:
  • Typoz, Ninshub
(2023-04-06, 05:05 PM)ersby Wrote: I've been tinkering with GPT too, asking it questions about something I knew a lot about (Beatles bootlegs) and it did well.

OT: did you by any chance hear about this newly emerged recording of the Beatles, made by a school pupil in April 1963? Great discovery!

[-] The following 2 users Like Ninshub's post:
  • ersby, Sciborg_S_Patel
