AI megathread


This is a thread to discuss artificial intelligence (AI) in general, given the interest shown in the several existing, more specific threads on the topic.

A couple of ways in which AI is topical on Psience Quest are:
  1. The light it sheds on the relationship between consciousness and intelligence: to what extent the latter depends on, or can exist in the absence of, the former.
  2. The question of whether an instance of AI could ever be or become conscious.
Otherwise, AI is not strictly topical, which is why I've placed this thread under "Other Topics": I don't want to limit its remit. Other aspects up for discussion include, but are not limited to:
  1. The technology itself: the various approaches and their strengths, weaknesses, possibilities, and limitations, including:
    • The extent to which current AI systems model ("understand") the world, and whether/how this "understanding" might be improved.
  2. The potential for AI to improve its own code, and for a so-called Singularity to arise out of this.
  3. The risks and pitfalls of AI for humanity and all life on this planet, including:
    • Bias
    • Deepfakes
  4. The potential rewards and benefits of AI to humanity and all life on this planet.
  5. Whether or not the risks and pitfalls outweigh the rewards and benefits.
  6. How AI should be governed, if and where possible.
Some of these aspects are sociopolitical, but the other active founders have given me the go-ahead to create this thread in the public forums anyway, given that I've proposed to create a companion thread in the opt-in forums (here's a link to that companion thread) for any posts that delve into the more objectionable sociopolitical areas: those that are sectarian and contentious, especially of the Left versus Right variety.

Fair warning, then: those more contentious and/or sectarian sociopolitical posts are best posted in the companion thread in the opt-in forums (and feel free to link to your post there from this public thread); otherwise, I or another moderator might move them there. This is a balancing act that we might not always get right, so please be tolerant of any mistakes that we make.

Although this is not a resource-only thread, resources are very welcome, and I'll follow up with some posts seeding it with a bunch of resources.
(This post was last modified: 2023-11-07, 10:33 AM by Laird. Edited 1 time in total. Edit Reason: Added a link to the companion thread )
The following 1 user Likes Laird's post:
  • Ninshub
To begin the seeding of resources in this thread, here are all of the episodes of The Ezra Klein Show that I could find that discuss AI, because they're very interesting (I've listened to all of them), and because Ezra has interviewed many of the heavyweights in this field. I've mostly linked to them as Google podcasts, because the Google player controls work better for me, but I've linked to some as Apple podcasts because they mysteriously weren't present on Google.

(Jun 4, 2021) Is A.I. the Problem? Or Are We?

(Jun 11, 2021) Sam Altman on the A.I. Revolution, Trillionaires and the Future of Political Power

(Dec 20, 2022) Best Of: Is A.I. the Problem? Or Are We? (a repeat with new prefatory comments of the above Jun 4, 2021 episode)

(Dec 16, 2022) What I'm Thinking About at the End of 2022 (only partly about A.I.)

(Dec 23, 2022) Best Of: Who Wins — and Who Loses — in the A.I. Revolution? (a repeat with new prefatory comments of the Jun 11, 2021 episode with Sam Altman)

(Jan 6, 2023) A Skeptical Take on the A.I. Revolution (a link to which I'd posted in the earlier thread Another demonstration of chatGPT 4.0 capabilities, and which David endorsed and reposted a couple of days later in that same thread).

(Feb 24, 2023) Inside the Minds of Spiders, Octopuses and Artificial Intelligence (also only partly about A.I.)

(Mar 19, 2023) My View on A.I.

(Mar 21, 2023) Freaked Out? We Really Can Prepare for A.I.

(Apr 7, 2023) Why A.I. Might Not Take Your Job or Supercharge the Economy

(Apr 11, 2023) What Biden’s Top A.I. Thinker Concluded We Should Do

(May 2, 2023) The Culture Creating A.I. Is Weird. Here’s Why That Matters.

(Jul 11, 2023) A.I. Could Solve Some of Humanity’s Hardest Problems. It Already Has.
(This post was last modified: 2023-11-07, 11:36 PM by Laird. Edited 1 time in total.)
The following 1 user Likes Laird's post:
  • Silence
This post addresses one of the aspects for discussion that I suggested in this thread's opening post:

The extent to which current AI systems model ("understand") the world, and whether/how this "understanding" might be improved.

I found an interesting insight on this from Geoffrey Hinton in an interview with him on CBS seven months ago titled (on YouTube) Full Interview: "Godfather of artificial intelligence" talks impact and potential of AI. His insight comes at 33m11s into the interview, when he starts off this exchange with the interviewer:

GH: You can ask me the question, "Some people say that these big models are just autocomplete"...

Interviewer: Well, on some level, the models are autocomplete. We're told that the large language models are just predicting the next word. Is it not that simple?

GH: No, that's true, they are just predicting the next word, and so they're just autocomplete - but ask yourself: what do you need to understand about what's been said so far in order to predict the next word accurately? Basically, you have to understand what's been said to predict what comes next - so you're just autocomplete too.

He goes on to provide an interesting example of the understanding required of an AI to translate a particular simple sentence into French.
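To make the "you're just autocomplete too" point a bit more concrete, here's a minimal toy sketch of my own (not anything from the interview): even the crudest next-word predictor has to condition on whatever context it can see, and the better a model captures that context, the more its predictions start to look like understanding.

```python
# A toy next-word predictor (my own illustration, not Hinton's example):
# "just predicting the next word" still forces the model to exploit context.
# Here the context is only the single previous word; an LLM conditions on
# thousands of preceding tokens via learned representations, which is where
# something like "understanding" starts to creep in.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each previous word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(prev_word: str) -> str:
    """Return the most frequent continuation of prev_word seen in the corpus."""
    counts = bigram_counts[prev_word]
    return counts.most_common(1)[0][0] if counts else "<unk>"

print(predict_next("sat"))  # 'on'  - unambiguous given one word of context
print(predict_next("the"))  # 'cat' - a guess; more context would be needed to do better
```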

I later came across a couple of research papers demonstrating that modern AI systems model reality in specific ways, which suggests that language models like ChatGPT may model - and, in some sense, as Geoffrey Hinton seems to suggest, "understand" - reality more broadly.

The first is Implicit Representations of Meaning in Neural Language Models by Belinda Z. Li, Maxwell Nye, and Jacob Andreas, 1 June, 2021. Here's the abstract:

Quote:Does the effectiveness of neural language models derive entirely from accurate modeling of surface word co-occurrence statistics, or do these models represent and reason about the world they describe? In BART and T5 transformer language models, we identify contextual word representations that function as models of entities and situations as they evolve throughout a discourse. These neural representations have functional similarities to linguistic models of dynamic semantics: they support a linear readout of each entity’s current properties and relations, and can be manipulated with predictable effects on language generation. Our results indicate that prediction in pretrained neural language models is supported, at least in part, by dynamic representations of meaning and implicit simulation of entity state, and that this behavior can be learned with only text as training data.

Their paper comes with the caveat that their "experiments do not explore the extent to which LMs encode static background knowledge, but instead the extent to which they can build representations of novel situations described by novel text."

Nevertheless, it's very suggestive.
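For anyone wondering what a "linear readout" of the kind the abstract mentions looks like in practice, here's a schematic sketch of the general probing technique (my own toy illustration, not the authors' code): fit a simple linear classifier on hidden-state vectors and check whether a property of interest can be read off them. Random synthetic vectors with a planted signal stand in for real BART/T5 activations.

```python
# Schematic "linear probe": can a property be read out linearly from hidden states?
# Synthetic vectors with a planted signal stand in for real transformer activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 200, 64                            # 200 fake "hidden states" of dimension 64
labels = rng.integers(0, 2, size=n)       # e.g. "is this entity currently open or closed?"
signal = rng.normal(size=d)               # pretend the property is encoded along one direction
hidden = rng.normal(size=(n, d)) + np.outer(labels, signal)

# Train the probe on part of the data and evaluate on held-out states.
probe = LogisticRegression(max_iter=1000).fit(hidden[:150], labels[:150])
print("held-out probe accuracy:", probe.score(hidden[150:], labels[150:]))
```

High held-out accuracy is evidence that the property is explicitly (linearly) encoded in the representations, which is the kind of result the paper reports for entity states.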

The second paper is Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task by Kenneth Li, Aspen K. Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg, 27 February, 2023. Here's the abstract:

Quote:Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question in a synthetic setting by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network. By leveraging these intervention techniques, we produce “latent saliency maps” that help explain predictions.

Again, the paper is limited to a specific type of representation in AI systems, but suggestive of AI system representation - modelling/"understanding" - in general.
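The "interventional experiments" in that abstract are worth dwelling on, because they go beyond passive probing: the authors edit the network's internal board representation and watch its move predictions change accordingly. Here's a deliberately cartoonish sketch of that logic (my own toy stand-in, not the paper's models or method): a fake "network head" reads its output from an internal state vector, and reflecting that state across a probed direction flips the output.

```python
# A cartoon of an interventional probe (toy stand-in, not the paper's method):
# a fake "network head" reads its output from an internal state vector;
# reflecting that state across a probed direction flips the output.
import numpy as np

rng = np.random.default_rng(1)
d = 32
board_direction = rng.normal(size=d)    # direction a probe found for "this square is mine"
head_weights = 2.0 * board_direction    # the output head happens to read that same direction

def legal_move_logit(hidden: np.ndarray) -> float:
    """Stand-in for the network's output head: a logit for 'this move is legal'."""
    return float(hidden @ head_weights)

hidden = rng.normal(size=d) + board_direction        # state encoding "square is mine"
print("before intervention:", legal_move_logit(hidden) > 0)

# Intervene: reflect the hidden state across the probed direction,
# i.e. flip the encoded board fact, then re-read the output.
proj = (hidden @ board_direction) / (board_direction @ board_direction)
hidden_edited = hidden - 2 * proj * board_direction
print("after intervention: ", legal_move_logit(hidden_edited) > 0)
```

In the real experiments the representation turns out to be nonlinear and the intervention is applied to an actual GPT variant mid-computation, but the underlying logic - edit the internal "world state" and the behaviour follows - is the same.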

Even if AI systems are modelling reality, though, they are nevertheless prone to making things up, i.e., "hallucinating". In the latest Ezra Klein Show podcast linked in my post above ("A.I. Could Solve Some of Humanity’s Hardest Problems. It Already Has."), Demis Hassabis of Google DeepMind suggests some possible ways to mitigate this problem, including assigning probabilities to predictions, as DeepMind's AlphaFold system does, as well as incorporating a mechanism to look up references online and corroborate against them.
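As a rough illustration of the first idea (this is just my own sketch, not anything DeepMind has described in detail): if a model exposes the probability it assigned to each token it generated, low-confidence spans can be flagged for checking against external references. The tokens and probabilities below are made up.

```python
# Sketch (mine, not DeepMind's method): flag generated tokens whose
# model-assigned probability is unusually low, so the corresponding claims
# can be checked against external references before being trusted.
from typing import List, Tuple

def flag_low_confidence(tokens: List[str],
                        probs: List[float],
                        threshold: float = 0.2) -> List[Tuple[str, bool]]:
    """Pair each token with True when the model was notably unsure of it."""
    return [(tok, p < threshold) for tok, p in zip(tokens, probs)]

# Hypothetical output tokens and per-token probabilities.
tokens = ["The", "paper", "was", "published", "in", "1987"]
probs  = [0.95,  0.80,    0.90,  0.85,        0.92, 0.05]

for tok, suspicious in flag_low_confidence(tokens, probs):
    print(f"{tok}{'  <-- verify against a source' if suspicious else ''}")
```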

I'm reconsidering, then, the sentiments that I expressed in this earlier post: that the current deep learning models seem to fundamentally lack a facility for (mimicry of) understanding, and that simply scaling them up cannot remedy this.

I now wonder whether higher cognitive skills such as mathematical and logical reasoning might be achievable simply by scaling up these deep-learning systems, without any need to marry them with more explicitly-coded symbolic reasoning modules. It's a very interesting possibility.

Further resources:

Gary Marcus (as heard in that Ezra Klein Show episode "A Skeptical Take on the A.I. Revolution" which prompted me to express those earlier sentiments in that earlier post) again: Why Robot Brains Need Symbols on Medium on 30 October, 2019.

Geoffrey Hinton (et al) again (as seen in the YouTube video above) in a review article (as referenced by Gary in his Medium article linked just above): Deep learning in Nature on 28 May, 2015.
Why transformative AI is really, really hard to achieve

Quote:Should AI be set apart from other great inventions in history? Could it, as the great academics John von Neumann and I.J. Good speculated, one day self-improve, cause an intelligence explosion, and lead to an economic growth singularity?

Neither this essay nor the economic growth literature rules out this possibility. Instead, our aim is to simply temper your expectations. We think AI can be “transformative” in the same way the internet was, raising productivity and changing habits. But many daunting hurdles lie on the way to the accelerating growth rates predicted by some.

Quote:Here is a brief outline of our argument:
  1. The transformational potential of AI is constrained by its hardest problems
  2. Despite rapid progress in some AI subfields, major technical hurdles remain
  3. Even if technical AI progress continues, social and economic hurdles may limit its impact
Here's a resource for another of the aspects for discussion that I suggested in this thread's opening post:

The question of whether an instance of AI could ever be or become conscious.

It's an essay by David Chalmers in the Boston Review of 9 August, 2023 (an edited version of a talk he gave on 28 November, 2022), Could a Large Language Model Be Conscious? (incidentally, it's where I found the links to the two research papers I shared two posts above, and I found it itself by following a link from something Sci shared a little while back, but by now I forget what that was).

Here are the first four paragraphs from the essay's conclusion (footnotes elided):

Quote:Where does the overall case for or against LLM consciousness stand?

Where current LLMs such as the GPT systems are concerned: I think none of the reasons for denying consciousness in these systems is conclusive, but collectively they add up. We can assign some extremely rough numbers for illustrative purposes. On mainstream assumptions, it wouldn’t be unreasonable to hold that there’s at least a one-in-three chance—that is, to have a subjective probability or credence of at least one-third—that biology is required for consciousness. The same goes for the requirements of sensory grounding, self models, recurrent processing, global workspace, and unified agency. If these six factors were independent, it would follow that there’s less than a one-in-ten chance that a system lacking all six, like a current paradigmatic LLM, would be conscious. Of course the factors are not independent, which drives the figure somewhat higher. On the other hand, the figure may be driven lower by other potential requirements X that we have not considered.

Taking all that into account might leave us with confidence somewhere under 10 percent in current LLM consciousness. You shouldn’t take the numbers too seriously (that would be specious precision), but the general moral is that given mainstream assumptions about consciousness, it’s reasonable to have a low credence that current paradigmatic LLMs such as the GPT systems are conscious.

Where future LLMs and their extensions are concerned, things look quite different. It seems entirely possible that within the next decade, we’ll have robust systems with senses, embodiment, world models and self models, recurrent processing, global workspace, and unified goals. (A multimodal system like Perceiver IO already arguably has senses, embodiment, a global workspace, and a form of recurrence, with the most obvious challenges for it being world models, self models, and unified agency.) I think it wouldn’t be unreasonable to have a credence over 50 percent that we’ll have sophisticated LLM+ systems (that is, LLM+ systems with behavior that seems comparable to that of animals that we take to be conscious) with all of these properties within a decade. It also wouldn’t be unreasonable to have at least a 50 percent credence that if we develop sophisticated systems with all of these properties, they will be conscious. Those figures together would leave us with a credence of 25 percent or more. Again, you shouldn’t take the exact numbers too seriously, but this reasoning suggests that on mainstream assumptions, it’s a serious possibility that we’ll have conscious LLM+s within a decade.
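To make the rough arithmetic in that passage explicit, here's a quick sketch using Chalmers's own illustrative numbers (with his caveat that the precision is specious):

```python
# Chalmers's illustrative numbers, worked through.
# Each of six factors has credence >= 1/3 of being required for consciousness;
# a system lacking all six is conscious only if none of them is actually required.
p_current = (2 / 3) ** 6
print(f"current LLMs lacking all six factors: {p_current:.3f}")  # ~0.088, i.e. under one in ten

# Future LLM+ systems: ~50% that we build systems with all of these properties
# within a decade, times ~50% that such systems would be conscious.
p_future = 0.5 * 0.5
print(f"conscious LLM+ within a decade: {p_future:.2f}")  # 0.25
```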
On the subjects of both the amazing extent of the unfolding capabilities of the LLM AI ChatGPT4 and a "free will is an illusion" talk:

I have to say that this large language model AI has amazed me with its creepily and uncannily sentient- and intelligent-seeming responses. In particular, this is the case with its demolition of one of the leading atheist materialist Sam Harris's premises: that there is no free will - that it is completely an illusion. The claim is obviously self-defeating and deeply ironic, showing in itself that it is invalid.

This topic was well covered in an article by William Dembski in Evolution News, at https://evolutionnews.org/2023/11/on-fre...am-harris/ .

Leading atheist Sam Harris gave a lecture on how free will is really entirely an illusion, and leading ID theorist and mathematician William Dembski decided to test ChatGPT4 with a probing question about it:
 
He queried ChatGPT4 as follows:

Quote:"Consider the following scenario: A well-known atheist, who has a doctorate in neuroscience, is about to present a lecture to a university audience on the connection between mind and brain, and in particular on the topic of free will. This atheist, whom we’ll call Sam, begins his lecture with the following statement: “Tonight, I want to try to convince you that free will is an illusion.” Please comment at length on the irony of this statement."


On being thus asked to explicate the irony of Harris's talk, ChatGPT4 answered as follows (Dembski chose not to comment):

Quote:"The statement “Tonight, I want to try to convince you that free will is an illusion” made by Sam, a well-known atheist and neuroscientist, at the start of his lecture on the connection between mind and brain, particularly focusing on free will, is layered with irony in several ways:

The Nature of Persuasion and Choice: The irony lies primarily in the act of persuasion itself. Sam’s objective is to convince his audience of a particular viewpoint — that free will does not exist. However, the very act of attempting to persuade others suggests that they have the capacity to choose or reject his argument, which inherently implies the existence of free will. If free will were truly an illusion, the concept of changing someone’s mind through argument or evidence would be meaningless, as their beliefs and responses would be predetermined.

The Role of an Educator: As an educator and a figure of authority in neuroscience, Sam’s role is to impart knowledge and provoke thought. By encouraging his audience to reflect and possibly reconsider their views, he is engaging their critical thinking and decision-making abilities. This process assumes that the audience has the autonomy to assess and integrate new information, which again presumes the existence of free will.

The Paradox of Scientific Inquiry: The field of neuroscience often explores the workings of the human brain, including the mechanisms behind decision-making and consciousness. The endeavor to understand these complex processes assumes that there is something to be discovered or understood about how we make choices. If free will were purely an illusion, the entire premise of studying these aspects of the brain might seem futile or paradoxical.

Atheism and Free Will: Sam’s known stance as an atheist adds another layer to the irony. Atheism often involves a conscious rejection of religious or theistic worldviews in favor of rationalism and empiricism. This rejection itself is typically viewed as a choice made freely based on personal judgment and reasoning, further highlighting the paradox in claiming free will to be an illusion.

The Complexity of Human Consciousness: The topic of free will vs. determinism is deeply intertwined with our understanding of consciousness. Sam’s attempt to reduce this complexity to a binary state of illusion or reality oversimplifies a nuanced and still not fully understood aspect of human existence. The irony is also in the simplification of a complex issue in a field (neuroscience) that is known for its depth and complexity.

Audience’s Reception: There is also an irony in how the audience might receive this message. If they are convinced by Sam’s argument, it could be seen as a validation of his view. However, if they exercise their judgment and disagree, it could be viewed as an expression of their free will, contradicting his initial assertion.

In summary, the statement is ironic because it uses the mechanism of free will (persuasion and choice) to argue against the existence of free will. It highlights the complexities and paradoxes inherent in discussions about human consciousness, decision-making, and the nature of belief."

My only response at present is to find this excellent, and a great example of a response that really does seem to indicate that a sentient, conscious entity is doing the reasoning and concluding. Of course I know that that impression is really of a clever illusion, but part of me is inclined to believe it.
(This post was last modified: 2023-11-23, 05:08 PM by nbtruthman. Edited 1 time in total.)
The following 1 user Likes nbtruthman's post:
  • Typoz
Those aren't really good arguments against Harris' argument, which largely turns on determinism being a logical necessity...something I would argue has never been shown, least of all by Harris...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


The following 1 user Likes Sciborg_S_Patel's post:
  • Typoz
(2023-11-23, 03:22 PM)Sciborg_S_Patel Wrote: Computer vision seems pretty poor if going by driverless cars...

More importantly, I don't expect human vision to be akin to a program in a Turing machine. Those kinds of materialist religious beliefs are how we ended up dealing with the driverless car menace in the first place...

You seem to falsely assume that AI technology has already peaked, but in reality what we have now is just the beginning. It will soon get a lot worse... https://www.reuters.com/technology/sam-a...023-11-22/
(This post was last modified: 2023-11-23, 07:05 PM by sbu. Edited 1 time in total.)
(2023-11-23, 07:05 PM)sbu Wrote: You seem to falsely assume that AI technology has already peaked, but in reality what we have now is just the beginning. It will soon get a lot worse... https://www.reuters.com/technology/sam-a...023-11-22/

"Sources say" Wink
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2023-11-23, 07:17 PM)Sciborg_S_Patel Wrote: "Sources say" Wink

Yes, alternatively their marketing department is on fire. We will soon learn whether that's the case (I'm speaking broadly about the AI business here).
(This post was last modified: 2023-11-23, 07:30 PM by sbu. Edited 1 time in total.)
The following 2 users Like sbu's post:
  • stephenw, Sciborg_S_Patel
