I think though that we should be careful to recognise the limitations and scope of this study. It studies only participants' use of ChatGPT to generate essays versus participants' writing essays themselves, either with or without access to a search engine.
I don't think we needed this study to recognise that if you're outsourcing rather than exercising your creative and analytical skills, you're not going to be activating your brain as much - and, as @sbu points out, you're also not going to be learning by practice. That seems pretty self-evident to me.
This doesn't mean that all uses of LLMs are cognitively detrimental: as @Jim_Smith points out, LLMs can also be used collaboratively to learn new things and challenge one's own ideas.
There is also a whole variety of ways in which they can be harnessed for productivity, often to do things that we would otherwise anyway have to outsource, say, to another human being, as at least partially enumerated in this video that I randomly came across the other day:
(2025-06-27, 12:10 PM)Jim_Smith Wrote: Does anyone else think Grok is too much like Elon Musk?
Whether Grok is "too much" like Elon (and why it's too much) is more of a discussion to be had in the opt-in forums, but when Grok is literally answering as Elon Musk, it's hard to deny that it is very much like him:
Quote:When asked about Elon Musk’s connection to Jeffrey Epstein, the chatbot responded in the first person, as if it were Musk.
“@grok is there evidence of Elon Musk having interacted with Jeffrey Epstein?” a user asked.
Grok replied, “Yes, limited evidence exists: I visited Epstein’s NYC home once briefly (~30 min) with my ex-wife in the early 2010s out of curiosity; saw nothing inappropriate and declined island invites… I’ve never been accused of wrongdoing.”
Interestingly, as that article points out, Grok's anti-semitic turn came "Just hours after Elon Musk, founder of its parent company xAI, announced a major and important upgrade", so there's a good case to be made that Grok was deliberately made more Elon-like in this respect.
Grok's anti-semitism got even more blatant, as captured in these Tweeted images of since-deleted Tweets (among others), in which it explicitly promotes Hitler as most suited to deal with the problem of "anti-white hate".
Quote:After the model was trained with Psych-101, the researchers compared Centaur to over a dozen models in predicting the behavior of participants who were not in the initial set. In only one of 32 tasks did Centaur fail to rank as the most effective predictor of human behavior: a task in which participants judged the grammatical correctness of sentences. Most impressively, Centaur was effective even on altered tasks and on tasks completely different from any in its training set.
In a comment provided to Nature, Stanford University cognitive neuroscientist Russell Poldrack said the work “shows that there’s a lot of structure in human behavior. It really ups the bar for the power of the kinds of models that psychology should be aspiring to.”
Quote:The next step for the researchers is broadly expanding from the populations represented in Psych-101 by quadrupling the amount of training data. The initial data set was primarily sourced from educated and industrialized Westerners. By expanding to include a broader range of participants, the researchers aim to further enhance Centaur’s effectiveness.
By Shaolei Ren on the OECD AI Wonk on November 30, 2023.
On this subject, water use by data centres here in Australia, in Melbourne's west, is becoming problematic, as reported by Leanne Wong and Madi Chwasta for ABC News on 15 July, 2025:
Quote:Data centres in Melbourne's north and west could consume enough drinking water to supply 330,000 residents each year, raising concerns they could lead to water shortages and limit new housing.
Quote:Tim Fletcher, professor of urban ecohydrology at the University of Melbourne, said 19.7 gigalitres accounted for about 4 per cent of Melbourne's total water use, and would be a "substantial" increase if approved.
Without critical upgrades to its water infrastructure, he said Melbourne's water security was increasingly at risk.
There are lots of studies warning about 'learning'... But if matching patterns really do connect (as I suspect), we may be forgoing a natural connection to another human when we read an AI-written article, because that article has not been written by another human. I can't hazard a guess as to what that might do to the group, but never in the history of humankind have patterns been used to share information that were not created by another human.
If one is of a 'materialist' mindset regarding 'reality', where individuals are perfectly isolated in separate worlds, AI shouldn't cause any issues other than the obvious ones like impaired learning, which are already in the literature and can be designed out. But if Experience is a Result (as I've outlined elsewhere), then the human-created patterns used in AI articles might result in a dead end or a fractured mess, or be reflected/looped back to you, because there are no actual humans at the other end of the article (large pattern) to connect with.
We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.