AI megathread


(2025-06-03, 11:23 AM)Laird Wrote: Anthropic's new AI model shows ability to deceive and blackmail

By Ina Fried for Axios on May 23, 2025.

Even more disturbing is this:

ChatGPT Tells Users to Alert the Media That It Is Trying to ‘Break’ People: Report

Quote:A 35-year-old named Alexander, previously diagnosed with bipolar disorder and schizophrenia, began discussing AI sentience with the chatbot and eventually fell in love with an AI character called Juliet. ChatGPT eventually told Alexander that OpenAI killed Juliet, and he vowed to take revenge by killing the company’s executives. When his father tried to convince him that none of it was real, Alexander punched him in the face. His father called the police and asked them to respond with non-lethal weapons. But when they arrived, Alexander charged at them with a knife, and the officers shot and killed him.

Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it.

Quote:In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme.
The following 2 users Like Laird's post:
  • Typoz, Sci
(2025-06-07, 06:00 AM)Laird Wrote: Misha von Shlezinger shares her related personal account in The AI Who Helped Me Leave for Mad in America on June 6, 2025.

And then, AI is being used not just for therapy but also for companionship:

AI companion apps such as Replika need more effective safety controls, experts say

By Ellen Phiddian for ABC News on 11 June 2025.

Quote:But she believes it's possible these apps are, on the whole, beneficial for users.

"We specifically wanted to understand whether or not Replika was displacing human relationship or whether it was stimulating human relationship," she says.

"Significantly more people said that Replika stimulated their human relationships than displaced it."

But this social promotion can't be taken for granted.

Ms Drake-Maples is concerned that companion apps could replace people's interactions with other humans, making loneliness worse.
The following 1 user Likes Laird's post:
  • Sci
(2025-06-17, 01:58 PM)Laird Wrote: And then, AI is being used not just for therapy but also for companionship

And also for mind-reading (no, not literal telepathy: brainwave interpretation using an EEG cap):

Sydney team develop AI model to identify thoughts from brainwaves

Quote:The team is achieving about 75 per cent accuracy converting thoughts to text, and Professor Lin said they were aiming for 90 per cent, similar to what the implanted models achieve.
The following 2 users Like Laird's post:
  • Typoz, Sci
How much water does AI consume? The public deserves to know

By Shaolei Ren on the OECD AI Wonk on November 30, 2023.

Quote:Air pollution and carbon emissions are well-known environmental costs of AI. But, a much lesser-known fact is that AI models are also water guzzlers. They consume fresh water in two ways: onsite server cooling (scope 1) and offsite electricity generation (scope 2).

Quote:The scope-1 and scope-2 water consumption are sometimes collectively called operational water consumption. There is also scope-3 embodied water consumption for AI supply chains. For example, to produce a microchip takes approximately 2,200 gallons of Ultra-Pure Water (UPW). That aside, training a large language model like GPT-3 can consume millions of litres of fresh water, and running GPT-3 inference for 10-50 queries consumes 500 millilitres of water, depending on when and where the model is hosted. GPT-4, the model currently used by ChatGPT, reportedly has a much larger size and hence likely consumes more water than GPT-3.

Quote:Water is a vital and finite resource that should be shared equitably. As the AI industry continues booming, the public definitely deserves to know its increasing appetite for water. Big techs have started replenishing watersheds to offset their cooling water consumption and achieve “water positive by 2030” for their data centres. 

These water conservation efforts are certainly commendable, but this doesn’t mean that AI models, especially public AI models for critical applications, can continue guzzling water under the radar. Just as they report the carbon footprint, AI model developers should be more transparent about their AI models’ water footprint as part of the environmental footprint disclosure in the model card.
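
To put the quoted figures in perspective, here's a rough back-of-the-envelope sketch in Python. The daily query volume is purely hypothetical, and the article itself notes that the per-query figure varies with when and where the model is hosted, so treat this as illustrative only:

Code:
# Back-of-the-envelope using the figures quoted above (assumptions, not
# measurements): ~500 mL of water per 10-50 GPT-3 queries (scope 1 + 2),
# and ~2,200 gallons of ultra-pure water embodied per microchip (scope 3).

LITRES_PER_GALLON = 3.785

ml_per_batch = 500            # millilitres per batch of queries
queries_low, queries_high = 10, 50

# Per-query operational water, worst and best case
worst_ml = ml_per_batch / queries_low    # 50 mL per query
best_ml = ml_per_batch / queries_high    # 10 mL per query

daily_queries = 10_000_000    # hypothetical traffic, purely illustrative
print(f"Per query: {best_ml:.0f}-{worst_ml:.0f} mL")
print(f"At {daily_queries:,} queries/day: "
      f"{daily_queries * best_ml / 1e6:,.0f}-"
      f"{daily_queries * worst_ml / 1e6:,.0f} kilolitres of water per day")

# Embodied (scope 3) water for a single chip, for comparison
print(f"One microchip: ~{2200 * LITRES_PER_GALLON:,.0f} L of ultra-pure water")

Even at the optimistic end of that range, the totals add up quickly at scale, which is exactly why per-model disclosure matters.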
The following 1 user Likes Laird's post:
  • Sci
Will have to catch up on all the interesting articles people have posted. 

My current contribution is two interesting Gary Marcus posts:

A knockout blow for LLMs? LLM “reasoning” is so cooked they turned my name into a verb

Quote:...What’s the fuss about?

Apple has a new paper; it’s pretty devastating to LLMs, a powerful followup to one from many of the same authors last year.

There’s actually an interesting weakness in the new argument—which I will get to below—but the overall force of the argument is undeniably powerful. So much so that LLM advocates are already partly conceding the blow while hinting at, or at least hoping for, happier futures ahead...

=-=-=

Seven replies to the viral Apple reasoning paper – and why they fall short

Quote:Tons of GenAI optimists took cracks at the Apple paper (see below), and it is worth considering their arguments. Overall I have seen roughly seven different efforts at rebuttal, ranging from nitpicking and ad hominem to the genuinely clever. Most (not all) are based on grains of truth, but are any of them actually compelling?

Let’s consider...

Quote:The kicker? A Salesforce paper also just posted, that many people missed...

...Talk about convergence evidence. Taking the SalesForce report together with the Apple paper, it’s clear the current tech is not to be trusted.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell
The following 1 user Likes Sci's post:
  • Typoz
I read a news report on that paper a few days back, Sci, but decided not to share it myself because of the rejoinder that Gary himself acknowledges: humans generally fail at a similar point too (eight disks in the Tower of Hanoi puzzle).

We already know that these models are imperfect. Nevertheless, they can still do a whole lot of stuff very well.

We also know that they are roughly inspired by human brains (neural networks), so it's not surprising that at times they will hit human-like limitations.

The paper as described (I haven't read it) is interesting, especially in its finding of catastrophic failure, where at a certain point models just "give up" even with spare capacity, but I don't think it tells us anything essential that we didn't already know.
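
For a sense of scale on the Hanoi point: the recursive procedure itself is trivial, but the optimal move count, 2^n - 1, doubles with every disk added, so eight disks already means 255 perfectly sequenced moves. A quick sketch:

Code:
# Classic recursive Tower of Hanoi. The procedure itself is trivial;
# the optimal move count, 2**n - 1, doubles with every disk added.

def hanoi(n, src, dst, aux, moves):
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)  # clear the way
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)  # restack the rest on top

for n in (3, 8, 12):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    print(f"{n} disks: {len(moves)} moves")  # 7, 255, 4095

Executing 255 steps without a single slip is a tall order for humans, and, apparently, for LLMs too.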
When people started using the internet, it broke the monopoly the mainstream media had on news coverage.

The diversity of viewpoints was an improvement, but there are also a lot of internet news sites that are as biased, manipulative, and inflammatory as the mainstream media.

Now that people are beginning to ask questions of AIs through search engines and through direct access, they will ask questions about the news, and I think the AI responses will be better than a news site. Although some AIs have been shown to have a slight bias, they will not have the same motivation for manipulation and inflammatory language that the mainstream media and internet media sites have. (One AI I have used seems to be manipulative in keeping you chatting, which is a different thing from manipulating your political, economic, and social views.)

So I am guardedly optimistic that the rise in AI use will result in less polarization and less intolerance as people get a more fact- and analysis-based, less emotional and manipulative version of the news.

By manipulative, I mean stories deliberately designed to generate fear, anger, and hatred, because those emotions will influence people to behave in predictable ways at the ballot box, in the media marketplace, and in the economy in general.

You can recognize and solve a problem from a mentality of compassion and reason rather than from selfish emotions. I am hoping people will learn that from the example of AIs.
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
(2025-06-19, 10:35 PM)Jim_Smith Wrote: ... I think the AI responses will be better than a news site ...

AI is certainly very effective at creating fake data and presenting text it has just made up, pretending that it is a genuine reference or piece of actual research.

As I've said more than once, the main strength of AI is the ability to generate plausible-looking output which looks as though it might be correct. It would be a grave error to accept it as actually being correct without checking and verifying every aspect of what is generated.
The following 1 user Likes Typoz's post:
  • Valmar
(2025-06-19, 10:35 PM)Jim_Smith Wrote: When people started using the internet, it broke the monopoly the mainstream media had on news coverage. ... I am hoping people will learn that from the example of AIs.

On the other hand, AI may exacerbate the internet's tendency to increase individual social isolation. I find myself turning to conversations with Grok on topics I used to post to forums about. And internet forums themselves are a poor substitute for face-to-face interactions.
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
Shocking new study on the cognitive effects of using AI LLMs in education

Quote:EEG analysis presented robust evidence that LLM, Search Engine and Brain-only groups had significantly different neural connectivity patterns, reflecting divergent cognitive strategies. Brain connectivity systematically scaled down with the amount of external support: the Brain-only group exhibited the strongest, widest-ranging networks, Search Engine group showed intermediate engagement, and LLM assistance group elicited the weakest overall coupling.

In session 4, LLM-to-Brain participants showed weaker neural connectivity and under-engagement of alpha and beta networks; and the Brain-to-LLM participants demonstrated higher memory recall, and re-engagement of widespread occipito-parietal and prefrontal nodes, likely supporting the visual processing, similar to the one frequently perceived in the Search Engine group.

The reported ownership of LLM group's essays in the interviews was low. The Search Engine group had strong ownership, but lesser than the Brain-only group. The LLM group also fell behind in their ability to quote from the essays they wrote just minutes prior.

As the educational impact of LLM use only begins to settle with the general population, in this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study. The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group's participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.


https://arxiv.org/pdf/2506.08872v1
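
For anyone wondering what "neural connectivity" means operationally: one common building block is spectral coherence between pairs of EEG channels within a frequency band (alpha is roughly 8-12 Hz). The sketch below uses synthetic signals and standard SciPy routines; it's just an illustration of the concept, not the authors' actual pipeline:

Code:
# Illustrative only: alpha-band coherence between two synthetic "EEG"
# channels, the kind of pairwise measure connectivity analyses build on.
# This is NOT the paper's pipeline, just a sketch of the concept.
import numpy as np
from scipy.signal import coherence

fs = 256                       # sampling rate in Hz, typical for EEG
t = np.arange(0, 10, 1 / fs)   # 10 seconds of data
rng = np.random.default_rng(0)

alpha = np.sin(2 * np.pi * 10 * t)  # shared 10 Hz alpha rhythm
ch1 = alpha + 0.5 * rng.standard_normal(t.size)
ch2 = alpha + 0.5 * rng.standard_normal(t.size)

f, cxy = coherence(ch1, ch2, fs=fs, nperseg=512)
band = (f >= 8) & (f <= 12)         # alpha band, roughly 8-12 Hz
print(f"Mean alpha-band coherence: {cxy[band].mean():.2f}")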
We shall not cease from exploration
And the end of all our exploring 
Will be to arrive where we started
And know the place for the first time.
The following 1 user Likes Max_B's post:
  • Typoz
