A scary chat with ChatGPT about latest NDE account in NDE thread

105 Replies, 3265 Views

Mod here.

Posts containing more explicit socio-political content have been moved to the opt-in forums, as they've garnered continued reaction.

Please be mindful of the forum rules. (see rule 9).
[-] The following 1 user Likes Ninshub's post:
  • ersby
(2023-03-19, 02:12 PM)Ninshub Wrote: Mod here.

Posts containing more explicit socio-political content have been moved to the opt-in forums, as they've garnered continued reaction.

Please be mindful of the forum rules. (see rule 9).

Ninshub,

I think it would really help if you gave us a link to the moved posts, please.

David
[-] The following 1 user Likes David001's post:
  • Ninshub
Sorry about that David.

Here it is.
(2023-03-12, 04:57 PM)sbu Wrote: I'm currently investigating the capabilities of ChatGPT - an AI causing a lot of fuss at the moment. I assume everybody has heard about it already. I decided to subject it to the last NDE report in the NDE thread NDE's (166) (psiencequest.net). Here's a screenshot of how that went. I think this is scary technology. Imagine how much misinformation this technology is going to cause for unsceptical users.
Well, ChatGPT does have a disclaimer saying it'll be inaccurate and misleading. I tried to match its conclusions to the original pdf, which contains excerpts of two other papers. I wasn't able to, apart from maybe ChatGPT throwing the title of an unrelated reference into the mix ("The impact of religiosity on quality of life"). Did you click on the references it gave? It would be interesting to see what went wrong and where the other papers came from.
(2023-03-20, 08:17 AM)ersby Wrote: Well, ChatGPT does have a disclaimer saying it'll be inaccurate and misleading. I tried to match its conclusions to the original pdf, which contains excerpts of two other papers. I wasn't able to, apart from maybe ChatGPT throwing the title of an unrelated reference into the mix ("The impact of religiosity on quality of life"). Did you click on the references it gave? It would be interesting to see what went wrong and where the other papers came from.

I have noticed that in general you can't google the exact wording of a given ChatGPT response. I know the Bing variant works slightly differently from the one available on OpenAI's homepage (Bing has access to the Internet), but I think for both systems the responses are not produced by citing any particular reference. Instead the response is 'baked' into the neural network itself.
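To illustrate the point about un-googleable wording, here is a toy sketch (hypothetical vocabulary and weights, nothing like ChatGPT's actual model): an autoregressive language model samples each next token from a probability distribution computed by its network, so the resulting sentence is generated on the fly rather than retrieved from any source text.

```python
import math
import random

# Toy autoregressive sampler. Each next token is drawn from a
# distribution produced by the model's "weights", so the output
# sentence need not appear verbatim anywhere in the training data.
VOCAB = ["the", "near", "death", "experience", "was", "profound", "."]

def softmax(logits):
    # Turn raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def toy_logits(context):
    # Stand-in for a neural network: here the scores just depend
    # on how many tokens have been generated so far.
    return [((len(context) + i) % 5) - 2.0 for i in range(len(VOCAB))]

def generate(n_tokens, seed=0):
    random.seed(seed)
    context = []
    for _ in range(n_tokens):
        probs = softmax(toy_logits(context))
        token = random.choices(VOCAB, weights=probs, k=1)[0]
        context.append(token)
    return " ".join(context)

print(generate(6))
```

Because every token is sampled step by step, even this toy model can emit word sequences that exist in no document, which is consistent with getting zero hits when googling a response verbatim.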

I had another chat where I asked it about its interpretation of Depeche Mode's "Never Let Me Down Again". Interestingly (again) it delivered a response that at first seemed sensible, well formulated, something any non-skeptical person would immediately go with. This time too I tried to google the exact wording of the response and got zero hits. However, just as with the NDE article, I don't think the lyrics of the song supported ChatGPT's interpretation of the text. When you applied human reason it did not (yet) make sense.

Even though these experiments with ChatGPT fail under close scrutiny, I must admit this technology has thrown me into a deep personal crisis. I suddenly realise that we are most likely only a few years away from completely solving "the easy problem" of consciousness. We will be able to do a John Searle "Chinese room" experiment without being able to tell whether the entity within the room is conscious or not. I even considered letting it translate this piece of Danish/English into correct English - but now I won't, so you can know you are dealing with a real person  Smile

Is there a hard problem of consciousness? I used to believe so - but suddenly I'm not so certain anymore. Not all philosophers are convinced either..
[-] The following 1 user Likes sbu's post:
  • Typoz
(2023-03-20, 09:08 PM)sbu Wrote: Even though these experiments with ChatGPT fail under close scrutiny, I must admit this technology has thrown me into a deep personal crisis. I suddenly realise that we are most likely only a few years away from completely solving "the easy problem" of consciousness.

Nothing in ChatGPT seems close to solving anything with regard to consciousness?

Just look at driverless cars and all the issues when you try to take this out of the toy box and into some real world applied technology.

To me if you could see what is happening under the hood when the machine "learning" program is coming up with the response the magic trick would lose its lustre. There's a reason the whole thing is being marketed as a black box.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2023-03-20, 10:33 PM by Sciborg_S_Patel.)
[-] The following 5 users Like Sciborg_S_Patel's post:
  • Typoz, tim, Valmar, Ninshub, Max_B
(2023-03-20, 10:32 PM)Sciborg_S_Patel Wrote: Nothing in ChatGPT seems close to solving anything with regard to consciousness?

Just look at driverless cars and all the issues when you try to take this out of the toy box and into some real world applied technology.

To me if you could see what is happening under the hood when the machine "learning" program is coming up with the response the magic trick would lose its lustre. There's a reason the whole thing is being marketed as a black box.

It's a big bubble... bucket loads of fictitious cash thrown at it to hype things up for the benefit of a few people who do, and loads of people who don't, understand it... whilst the speculators escape with the real cash...

I'm sure for things with limited degrees of freedom, this current level of stuff will be useful - like analyzing an X-Ray, or picture of a retina, or bacteria in a petri dish etc. We've been using the output of simple neural-nets for years, to help mix paint colours.

I can see why this big-'learning' works for 2 dimensional problems. Flat things, like copying a flat painting, or a photo, relatively well... but it becomes a big problem as you go 3 dimensional, unless you constrain things drastically... but add in another dimension - time... jeez no way... they can't process temporally (yet). And the stuff I've seen so far (art and writing) is dead and lifeless... you can almost feel that access to the work's creator, through the unique pattern that they would normally have created, is absent.
We shall not cease from exploration
And the end of all our exploring 
Will be to arrive where we started
And know the place for the first time.
[-] The following 2 users Like Max_B's post:
  • Ninshub, Sciborg_S_Patel
(2023-03-20, 11:57 PM)Max_B Wrote: It's a big bubble... bucket loads of fictitious cash thrown at it to hype things up for the benefit of a few people who do, and loads of people who don't, understand it... whilst the speculators escape with the real cash...

I'm sure for things with limited degrees of freedom, this current level of stuff will be useful - like analyzing an X-Ray, or picture of a retina, or bacteria in a petri dish etc. We've been using the output of simple neural-nets for years, to help mix paint colours.

I can see why this big-'learning' works for 2 dimensional problems. Flat things, like copying a flat painting, or a photo, relatively well... but it becomes a big problem as you go 3 dimensional, unless you constrain things drastically... but add in another dimension - time... jeez no way... they can't process temporally (yet). And the stuff I've seen so far (art and writing) is dead and lifeless... you can almost feel that access to the work's creator, through the unique pattern that they would normally have created, is absent.

Yeah, the art went from something that seemed a threat to real artists to a bad joke in a matter of months IMO. I still think artists should mount lawsuits b/c I was able to get what was obviously a copyrighted piece of art within 5 minutes of playing around... there's a paper on retrieving original images and at least one lawsuit so far.

I agree there are definitely useful things machine "learning" can do, but as you say the technology has definite limits.

ChatGPT also doesn't seem that hard to trick:

How to trick OpenAI's ChatGPT

How [to] Hack ChatGPT?


(This post was last modified: 2023-03-21, 01:07 AM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 3 users Like Sciborg_S_Patel's post:
  • tim, Max_B, Ninshub
(2023-03-20, 09:08 PM)sbu Wrote: I have noticed that in general you can't google the exact wording of a given ChatGPT response. I know the Bing variant works slightly differently from the one available on OpenAI's homepage (Bing has access to the Internet), but I think for both systems the responses are not produced by citing any particular reference. Instead the response is 'baked' into the neural network itself.

I had another chat where I asked it about its interpretation of Depeche Mode's "Never Let Me Down Again". Interestingly (again) it delivered a response that at first seemed sensible, well formulated, something any non-skeptical person would immediately go with. This time too I tried to google the exact wording of the response and got zero hits. However, just as with the NDE article, I don't think the lyrics of the song supported ChatGPT's interpretation of the text. When you applied human reason it did not (yet) make sense.

Even though these experiments with ChatGPT fail under close scrutiny, I must admit this technology has thrown me into a deep personal crisis. I suddenly realise that we are most likely only a few years away from completely solving "the easy problem" of consciousness. We will be able to do a John Searle "Chinese room" experiment without being able to tell whether the entity within the room is conscious or not. I even considered letting it translate this piece of Danish/English into correct English - but now I won't, so you can know you are dealing with a real person  Smile

Is there a hard problem of consciousness? I used to believe so - but suddenly I'm not so certain anymore. Not all philosophers are convinced either..
Only in a few years? Afaik ChatGPT is not something that revolutionary technically. What you said is not new; in fact, people attributed consciousness to chatbots multiple times before it.

As for personal experiences, my own tinkering with the bot made me even more skeptical about the AI craze and its implications. It begins impressively but fails at basic things under further experimentation.
If AGI is even possible, we are centuries away from it.
[-] The following 3 users Like quirkybrainmeat's post:
  • Sciborg_S_Patel, tim, Ninshub
(2023-03-21, 01:45 AM)quirkybrainmeat Wrote: Only in a few years? Afaik ChatGPT is not something that revolutionary technically. What you said is not new; in fact, people attributed consciousness to chatbots multiple times before it.

As for personal experiences, my own tinkering with the bot made me even more skeptical about the AI craze and its implications. It begins impressively but fails at basic things under further experimentation.
If AGI is even possible, we are centuries away from it.

I think we are on an exponential curve here. Give it 10 more years.
