A scary chat with ChatGPT about latest NDE account in NDE thread

105 Replies, 3258 Views

(2023-03-21, 01:05 AM)Sciborg_S_Patel Wrote: Yeah, the art went from something that seemed a threat to real artists to a bad joke in a matter of months, IMO. I still think artists should mount lawsuits b/c I was able to get what was obviously a copyrighted piece of art within 5 minutes of playing around... there's a paper on retrieving original images and at least one lawsuit so far.

I agree there are definitely useful things machine "learning" can do, but as you say the technology has definite limits.

ChatGPT also doesn't seem that hard to trick:

How to trick OpenAI's ChatGPT

How [to] Hack ChatGPT?

On the other hand, imagine how easy it would be to massively hype up a crappy AI process to the public, one that gives such inaccurate answers when questioned that the public rapidly comes to the conclusion that AI is hype and not really a problem at all, and such a propaganda project is self-financing. I mean, it's like a massive PR exercise where you can test AI yourself and convince yourself that it's harmless... "...seeing is believing..." as they say.

I saw what they had back in the 1990s with Adrian Thompson's work on evolved hardware using a simple FPGA... I saw its potential... I glimpsed just how far the UK defense establishment had come when I stumbled into that US DoD website... it's relatively easy to see where they are going... this OpenAI technology is a very different animal, a shackled, safe public technology that acts as a fig leaf for what the establishment is really developing.
We shall not cease from exploration
And the end of all our exploring 
Will be to arrive where we started
And know the place for the first time.
[-] The following 1 user Likes Max_B's post:
  • Sciborg_S_Patel
(2023-03-21, 08:00 AM)sbu Wrote: I think we are on an exponential curve here. Give it 10 more years.
So you went from doubting emergentism to being almost certain of it only because of ChatGPT? 
Sorry, but given the shortcomings of the bot and the fact that it's not even something new, I will hard disagree and say that AI is not a viable path to understanding consciousness.
OpenAI itself beginning to do marketing about the concept fuels my skepticism.
(This post was last modified: 2023-03-21, 12:43 PM by quirkybrainmeat. Edited 1 time in total.)
[-] The following 4 users Like quirkybrainmeat's post:
  • tim, Sciborg_S_Patel, nbtruthman, Ninshub
(2023-03-21, 12:41 PM)quirkybrainmeat Wrote: So you went from doubting emergentism to being almost certain of it only because of ChatGPT? 
Sorry, but given the shortcomings of the bot and the fact that it's not even something new, I will hard disagree and say that AI is not a viable path to understanding consciousness.
OpenAI itself beginning to do marketing about the concept fuels my skepticism.

Keep in mind it’s the older ChatGPT 3.5 I have tested. The newer 4.0 supposedly has many of these issues fixed already.
(2023-03-21, 10:08 AM)Max_B Wrote: On the other hand, imagine how easy it would be to massively hype up a crappy AI process to the public, one that gives such inaccurate answers when questioned that the public rapidly comes to the conclusion that AI is hype and not really a problem at all, and such a propaganda project is self-financing. I mean, it's like a massive PR exercise where you can test AI yourself and convince yourself that it's harmless... "...seeing is believing..." as they say.

I saw what they had back in the 1990s with Adrian Thompson's work on evolved hardware using a simple FPGA... I saw its potential... I glimpsed just how far the UK defense establishment had come when I stumbled into that US DoD website... it's relatively easy to see where they are going... this OpenAI technology is a very different animal, a shackled, safe public technology that acts as a fig leaf for what the establishment is really developing.

Yeah, I don't doubt there are better versions of this sort of thing used for military purposes.

I still wouldn't consider it conscious though, unless maybe the very structure of the hardware is very different... but I'm not sure anyone knows what the relevant structures are yet.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2023-03-21, 11:38 PM by Sciborg_S_Patel.)
(2023-03-21, 09:43 PM)sbu Wrote: Keep in mind it’s the older ChatGPT 3.5 I have tested. The newer 4.0 supposedly has many of these issues fixed already.

When you say fixed what exactly do you mean?

Do you understand the programming of these types of chat bots?
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • tim
(2023-03-22, 01:26 AM)Sciborg_S_Patel Wrote: When you say fixed what exactly do you mean?

Do you understand the programming of these types of chat bots?

Who is charting the fixes? ChatBotters? That should work out well. <rolls eyes>
(2023-03-22, 02:26 AM)Leeann Wrote: Who is charting the fixes? ChatBotters? That should work out well. <rolls eyes>

I just don't think any fix is going to get us to consciousness.

I mean, if the computer isn't conscious when running different programs, why does it become conscious when running ChatGPT? That seems like an illogical Something-from-Nothing, justified by some vague statements about complexity or any of the usual excuses computationalists give.

I at least understand the argument that some synthetic life is possible if it emulates the as-yet-unknown aspects of our own brains.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2023-03-22, 03:03 AM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 3 users Like Sciborg_S_Patel's post:
  • tim, Typoz, Ninshub
(2023-03-22, 03:02 AM)Sciborg_S_Patel Wrote: I just don't think any fix is going to get us to consciousness.

I mean, if the computer isn't conscious when running different programs, why does it become conscious when running ChatGPT? That seems like an illogical Something-from-Nothing, justified by some vague statements about complexity or any of the usual excuses computationalists give.

I at least understand the argument that some synthetic life is possible if it emulates the as-yet-unknown aspects of our own brains.

My claim is not that it’s conscious. Consciousness is particularly ill-defined in objective terms, with philosophers unable to agree on which attributes to assign to it. There’s a whole branch of philosophers even denying the existence of qualia as anything but an illusion.
What I hypothesize is that as the AI models evolve with ever more parameters, they will eventually ‘behave’ indistinguishably from a human. In a few years you will not be able to trick one into revealing what it is.
(2023-03-21, 11:37 PM)Sciborg_S_Patel Wrote: Yeah I don't doubt there [are] better versions of this sort of thing used for military purposes.

I still wouldn't consider it conscious though, unless maybe the very structure of the hardware is very different..but not sure anyone knows what the relevant structures are though...yet.

Well, scientists are really only investigating us, our experience, our shared experience... I mean they can't be doing anything else but that, can they? Everything they've built, the LHC, synchrotron light sources etc., is to investigate us. And they've become very interested in protein crystals (exactly where their research had gone on the US DoD site). Together with what they've learned from particle physics etc., protein crystals are where it's at - those patterns.

You can't hide the relationships that form our experience, because our experience is formed from those relationships; they are everywhere, in everything. And that's what 'they' have been looking for... because 'they' know... and they have done everything they can to mislead us from knowing we are connected. They have kept that knowledge for themselves alone, and used it to influence us, whilst telling us that such a connection does not exist, and that is what has given them such power over us. Similar patterns have turned up in Nima's work, some of which he found in the work of Grothendieck. Some suggest Grothendieck had mental health issues; the letter (I've linked to) tells you that he probably just went somewhere else.
We shall not cease from exploration
And the end of all our exploring 
Will be to arrive where we started
And know the place for the first time.
(This post was last modified: 2023-03-22, 10:04 AM by Max_B. Edited 4 times in total.)
[-] The following 2 users Like Max_B's post:
  • Sciborg_S_Patel, Ninshub
(2023-03-22, 07:45 AM)sbu Wrote: My claim is not that it’s conscious. Consciousness is particularly ill-defined in objective terms, with philosophers unable to agree on which attributes to assign to it.
I suspect that it seems that way because it is not material at all, and we can only think clearly about material things.
Quote:There’s a whole branch of philosophers even denying the existence of qualia as anything but an illusion.
I think we have busted the concept of consciousness being an illusion enough times already!
Quote:What I hypothesize is that as the AI models evolve with ever more parameters, they will eventually ‘behave’ indistinguishably from a human. In a few years you will not be able to trick one into revealing what it is.

OK, that at least is a hypothesis we can think about.

It seems to me that there is a huge difference between AI as it is displayed in PR stunts and the performance of AI in the wild. I am particularly suspicious about this because that was exactly what happened in the first AI revolution back in the 1980s (yes, I was a young programmer back then!). Everything looked rosy until the hype reached a tipping point, after which the AI bubble collapsed.

The best description of the problem with AI is contained in this book - written by an AI developer.

https://www.amazon.com/Myth-Artificial-I...0674278666

The author gives a great example of a detective trying to solve a case. Obviously many crimes can be solved easily, but others are a real test of wits. Larson calls this kind of reasoning Abductive Reasoning, and it seems to start with an inspired guess, followed by lots of close reasoning. The problem is that there seem to be no rules for formulating such a guess.
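Larson's distinction can be sketched in code (a toy illustration of my own, not from the book; all the names and rules here are made up). Deduction runs mechanically from rules to conclusions, but abduction has to enumerate candidate explanations for an observation and then pick the "best" one, and nothing in the formalism tells you how to generate or rank the candidates in the first place:

```python
# Toy sketch of abductive inference over hand-written candidate hypotheses.
# Each hypothesis maps to the set of observations it would explain.
RULES = {
    "rained last night": {"grass is wet", "street is wet"},
    "sprinkler ran": {"grass is wet"},
    "pipe burst": {"street is wet"},
}

def abduce(observations):
    """Return hypotheses that would explain ALL the observations,
    preferring those that predict the least beyond what was seen."""
    candidates = [h for h, explains in RULES.items()
                  if observations <= explains]
    # 'Best explanation' here = fewest total predictions; real abduction
    # has no agreed rule for this ranking, which is Larson's point.
    return sorted(candidates, key=lambda h: len(RULES[h]))

print(abduce({"grass is wet"}))                    # rain and sprinkler both qualify
print(abduce({"grass is wet", "street is wet"}))   # only rain explains both
```

The hard part, of course, is hidden in the hand-written `RULES` table: the detective's "inspired guess" is the step that invents the candidate hypotheses at all, and that is precisely what this kind of mechanical search cannot do.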

I think a good real-world example of AI was/is the concept of the driverless car. This idea never seems to have got off the ground, and given the billions spent on the project, that is a fascinating fact.

1) It seems that early on, the developers concluded they would need to 'cheat' a bit. These cars had to be equipped with a SatNav running all the time. They also needed radar of some sort. Note that we do not need any of those gadgets if we are driving somewhere that we know.

Even after allowing for all that gadgetry (which would probably give problems if the technology were deployed for real), the projects seem stalled. I think the problem is that driving a car is an open-ended task. It may even involve some element of psi. Notice in particular that a project like driverless cars actually assumes that materialism is true - which is probably a poor assumption if humans or animals are involved.

Anyway, driverless cars still don't work as of 2023!

2) A lot of AI examples depend crucially on being plugged into the internet. This makes them hard to evaluate because they piggy-back on the efforts of millions of humans who have contributed to the internet piece by piece. That makes it hard to determine whether the AI is indistinguishable from a human mind, or is just clothing itself in the output of many human minds! This does not always matter, but it is surely vital if you want to make a philosophical point.

Presumably it is still forbidden to use internet-powered devices in an exam.

David
