AI megathread

195 Replies, 7743 Views

OpenAI Employees Say Firm's Chief Scientist Has Been Making Strange Spiritual Claims

Victor Tangermann

Quote:The entire situation is baffling. OpenAI, which is quickly approaching a $90 billion valuation, has been thrown into a deepening crisis by its overarching non-profit arm.

In the meantime, we're getting a closer-than-ever peek at what makes OpenAI's power players tick. Case in point: Sutskever has established himself as an esoteric "spiritual leader" at the company, per The Atlantic, cheering on the company's efforts to realize artificial general intelligence (AGI), a hazy and ill-defined state in which AI models become as capable as, or more capable than, humans — or maybe, according to some, even godlike. (His frenemy Altman has long championed attaining AGI as OpenAI's number one goal, despite warning for many years about the possibility of an evil AI outsmarting humans and taking over the world.)

Still, the Atlantic's new details are bizarre, even by the standards of tech industry wackadoos.

"Feel the AGI! Feel the AGI!" employees reportedly chanted, per The Atlantic, a refrain that was led by Sutskever himself.

The chief scientist even commissioned a wooden effigy to represent an "unaligned" AI that works against the interest of humanity, only to set it on fire.

In short, instead of focusing on meaningfully advancing AI tech in a scientifically sound way, some board members sound like they're making weird spiritual claims.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Typoz
(2023-11-23, 05:05 PM)nbtruthman Wrote: On being thus asked to explicate on the irony of Harris's talk, ChatGPT4 answered as follows (Dembski chose not to comment):

Often the AI system is next asked to present an opposing viewpoint. I wonder to what extent someone might consider the opposing view just as interesting.

Like Sciborg, I noticed some holes in the presentation. I'm tending much more towards my previous assessments, that ChatGPT4 is able to present some text which looks as though it might be correct. In general, it might sometimes present a correct response, but it is this appearance of correctness rather than actual correctness which seems to be the hallmark.
[-] The following 1 user Likes Typoz's post:
  • nbtruthman
(2023-11-23, 05:10 PM)Sciborg_S_Patel Wrote: Those aren't really good arguments against Harris' argument, which largely turns on determinism being a logical necessity...something I would argue has never been shown, least of all by Harris...

Not that I want to take ChatGPT4's position in this, but it seems to me that it correctly limited itself to the actual query - it was asked for an analysis of the irony of the statement, not what may be more serious flaws in it. The question was, exactly, "This atheist, whom we’ll call Sam, begins his lecture with the following statement: “Tonight, I want to try to convince you that free will is an illusion.” Please comment at length on the irony of this statement."

And it seems to me that the self-contradicting nature of the statement and its irony constitute probably the best arguments against it. Why are these arguments not good?
[-] The following 1 user Likes nbtruthman's post:
  • Sciborg_S_Patel
(2023-11-23, 11:18 PM)nbtruthman Wrote: And it seems to me that the self-contradicting nature of the statement and its irony constitute probably the best arguments against it. Why are these arguments not good?

I think the argument claiming the statement is ironic depends on the idea that convincing someone of something assumes there is a choice to be made that somehow refutes the assumption that people lack free will.

But I think it's arguable that even in a deterministic world one could make an argument in the hopes it convinces someone else. I guess one could say all logical arguments involve reasoning and are grounded in the feeling/quale of logical soundness...that perhaps gets one partially away from Harris' deterministic assumptions, but I'm not sure it's ironic, since one needs to show that determinism and logical reasoning are incompatible.

Irony would, as I understand it, be an obvious contradiction...like if Harris said, "We need to decide, as a society, what to do with the fact that determinism is true and free will is illusory."

(Note that I don't think determinism is true since I think all causation is mental causation.)
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2023-11-24, 03:21 AM by Sciborg_S_Patel. Edited 1 time in total.)
This is slightly off-topic, but I have just been re-reading a novel about artificial consciousness run on a computer. If AI were ever to equal or supersede human consciousness, it would be a consciousness running on a computer.

Science Fiction isn't everyone's cup of tea, so I'll also explain a bit about why I think this book is useful:

https://www.amazon.co.uk/Permutation-Cit...B004JHY84E

Greg Egan is a materialist, and his thoughts reflect his idea of the weird consequences of computer consciousness.

The book explores the idea that an alternative physics (sporting its own alternative biochemistry and life forms) can be based on higher-dimensional forms of cellular automata. This means you can have evolution (good old Darwinian evolution) of totally digital lifeforms inside a computer. At the same time, people's brains can be copied into computer form, to give them a kind of immortality. However, these Copies become worried that ordinary people might decide to switch the computers off, which would kill all the Copies.

However, the idea is that these cellular automata (CA) are mathematical objects, and it might be possible (i.e. it happens in the book) to effectively stuff Copies and their favourite software into the initial state of a CA, and their lives would continue - embedded in the mathematical theory - without any requirement for more actual hardware!
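For anyone unfamiliar with cellular automata, here is a minimal sketch of a one-dimensional one (Wolfram's elementary Rule 110). The automata in the novel are higher-dimensional and far richer, but the principle is the same: simple local update rules applied to a grid of cells can produce surprisingly complex global behaviour.

```python
# Minimal 1-D cellular automaton (Wolfram's elementary Rule 110).
# Each cell is 0 or 1; its next state depends only on itself and its
# two neighbours, looked up from the bits of the rule number.

def step(cells, rule=110):
    """Apply one update to the whole row (wrapping at the edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value 0..7
        out.append((rule >> neighbourhood) & 1)  # that bit of the rule
    return out

def run(width=32, steps=10, rule=110):
    """Evolve from a single live cell and return every generation."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

The whole future of such a system is fixed by its rule and initial state, which is exactly the property the novel leans on: the evolution is a mathematical object that exists independently of any particular run on hardware.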

This idea is strange because it would be a sort of non-materialist afterlife. We seem to have an ultra-materialist novel ending up in a non-materialist immortality! Since I am not a materialist, I think the logic of this novel must break down somewhere.

David
(This post was last modified: 2023-11-24, 11:47 AM by David001. Edited 1 time in total.)
[-] The following 2 users Like David001's post:
  • Sciborg_S_Patel, Typoz
As I understand it, ChatGPT-4 requires you to buy tokens to use it, whereas the earlier version is free. Is that right, and if so does anyone here experiment with version 4?

David
(2023-11-24, 11:42 AM)David001 Wrote: If AI were ever to equal or supersede human consciousness, it would be a consciousness running on a computer.

Consciousness and AI are of different types; there is no formula to transform one into the other.

Quote:Since I am not a materialist, I think the logic of this novel must break down somewhere.

In my view it breaks down very early on by (incorrectly) postulating that
ai = c
where ai is artificial intelligence and c is consciousness. The rest may be very interesting but it is fantasy rather than fiction I think.
[-] The following 2 users Like Typoz's post:
  • Sciborg_S_Patel, nbtruthman
(2023-11-24, 11:44 AM)David001 Wrote: As I understand it, ChatGPT-4 requires you to buy tokens to use it, whereas the earlier version is free. Is that right, and if so does anyone here experiment with version 4?

David

Yes, ChatGPT-4 is a commercial product. I have a license for it. It's a significant improvement over the previous versions, and there's also a GPT-5 version in development now that one can try out.
(This post was last modified: 2023-11-24, 02:34 PM by sbu.)
(2023-11-24, 03:20 AM)Sciborg_S_Patel Wrote: I think the argument claiming the statement is ironic depends on the idea that convincing someone of something assumes there is a choice to be made that somehow refutes the assumption that people lack free will.

But I think it's arguable that even in a deterministic world one could make an argument in the hopes it convinces someone else. I guess one could say all logical arguments involve reasoning and are grounded in the feeling/quale of logical soundness...that perhaps gets one partially away from Harris' deterministic assumptions but not sure it's ironic since one needs to show determinism and logical reasoning are incompatible.

Irony would, as I understand it, be an obvious contradiction...like if Harris said, "We need to decide, as a society, what to do with the fact that determinism is true and free will is illusory."

(Note that I don't think determinism is true since I think all causation is mental causation.)

A person making a conscious choice to advocate that there is absolutely, for certain, no free will has still, by his conscious subjective self acting as such an agent, at least probably contradicted his own claim, because the nature of this conscious self acting as an agent is a mystery and therefore cannot be assumed to be deterministic. This contradicts the opening statement, which claims certainty that there is no free will. This conscious, self-aware being senses himself as a free agent, and there is no compelling reason to deny him this, other than to assume that this feeling is an illusion. But then the inevitable question arises: who or what is entertaining this illusion? That leads to an endless regress, so this last notion is probably invalid.
[-] The following 1 user Likes nbtruthman's post:
  • Typoz
(2023-11-24, 12:08 PM)Typoz Wrote: Consciousness and AI are of different types; there is no formula to transform one into the other.


In my view it breaks down very early on by (incorrectly) postulating that
ai = c
where ai is artificial intelligence and c is consciousness. The rest may be very interesting but it is fantasy rather than fiction I think.

I also believe that computer consciousness is impossible, but new developments in LLM AI seem to be making a few dents in that belief. Apparently the developers of ChatGPT-4 are getting closer to demonstrating artificial general intelligence, which would at the very least come much closer to perfectly mimicking a conscious entity. A perfect mimic would be impossible to distinguish by behavior from a real human being (except perhaps for paranormal phenomena), making it at least theoretically able to manifest dangerous, all too human antisocial and destructive actions.

New AI threat looming?

https://www.reuters.com/technology/sam-a...023-11-22/

Quote:Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said.
........................................
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
.........................................
Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.
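The "statistically predicting the next word" mechanism the article describes can be illustrated with a toy bigram model: a deliberately tiny stand-in for what an LLM does at vastly greater scale. The corpus and function names here are purely illustrative, not anything from OpenAI.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predict the next word purely from counts
# of which word followed which in a training corpus. Real LLMs use neural
# networks over far longer contexts, but the core task is the same:
# assign probabilities to the next token.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # "cat" follows "the" in 2 of 4 cases
```

The article's contrast with mathematics is visible even here: the model happily offers a probable continuation whether or not it is correct, whereas a maths problem has a single right answer against which the output can be checked.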
