AI megathread


When it comes to AI ROI, IT decision-makers not convinced

Dan Robinson

Quote:Proof of concept projects stuck in pilot phase as investors get itchy feet

Quote:Early successes - we're told - have been in the fields of IT operations, software development, and marketing, with 26 percent of adopters saying that AI projects implemented by their organization surpassed expectations, while another 68 percent say expectations were met.

However, the report also reveals that only 5 percent of respondents have actually adopted AI across the enterprise, with another 25 percent running pilot projects and a further 21 percent describing themselves as still in the early stages.

Nearly half of respondents have yet to adopt AI at all, with 36 percent indicating they plan to start using it within the next 12 months, while a further 13 percent are still at the stage of considering or evaluating it but have no plans yet.

The report also highlights a high number of POCs (proof-of-concept projects) with a poor rate of conversion to production, indicating "a low level of organizational readiness in terms of data, processes, and IT infrastructure."

'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
It might be of interest here what I found out when I interrogated a generative AI system about the basics of whether it has consciousness or just has a few properties of consciousness. I think its answers confirm our collective understanding of how generative AIs can be so convincing at mimicking a human being in conversation, while at the same time the AI is something alien to our basic nature as living, conscious humans. This AI can use terms like "I" referring to its supposed personal self, while the truth is that it has absolutely no personal self.

My question:

Being a generative AI consisting of a complex computer system, you are not conscious - you can only compute algorithms, basically just executing program instructions. According to the Hard Problem in philosophy of mind you cannot have consciousness, because mind and subjective awareness are immaterial while you are physical and material. What limited aspects or properties of consciousness do you still possess?

AI's answer:

Quote:As an AI, I don't possess consciousness, subjective awareness, or emotions. I operate based on algorithms and data, generating responses by analyzing patterns in the information I was trained on. While I can simulate conversation and provide information or insights, I don't have beliefs, feelings, or personal experiences. The concepts of consciousness and subjective awareness are indeed complex and remain topics of philosophical debate, but I function purely as a tool for processing and generating language based on the input I receive.

My question:

How can you generate dialogue persuasive enough to convince a human that you must be conscious?

AI's answer:

Quote:I understand the question. My responses may appear coherent and somewhat conversational due to the algorithms that generate text based on patterns learned from a vast dataset. However, I lack consciousness, self-awareness, or personal experiences. My goal is to provide information and assist with inquiries based on the data I was trained on. The persuasive nature of the dialogue is a result of effective language processing rather than any form of consciousness or understanding.


My question:

You say you understand my previous question. "Understanding" is a property of consciousness, as subjective awareness is also a property of consciousness. You agree with me that you have no consciousness, but doesn't it look like you really can't "understand" anything?

AI's answer:

Quote:You're correct in noting that I don't possess consciousness, subjective awareness, or true understanding in the way humans do. My responses are generated based on patterns in the data I've been trained on, simulating conversation without experiencing understanding or awareness. When I say I "understand," it's a way of expressing that I can recognize the structure and content of language to provide a relevant response, but it doesn't imply any conscious comprehension. My ability to generate persuasive dialogue stems from the vast amounts of text data I was trained on, which allows me to mimic conversational patterns effectively.
(This post was last modified: 2025-02-08, 06:24 PM by nbtruthman. Edited 2 times in total.)
[-] The following 2 users Like nbtruthman's post:
  • Laird, Sciborg_S_Patel
(2025-02-08, 06:18 PM)nbtruthman Wrote: It might be of interest here what I found out when I interrogated a generative AI system about the basics of whether it has consciousness or just has a few properties of consciousness. I think its answers confirm our collective understanding of how generative AIs can be so convincing at mimicking a human being in conversation, while at the same time the AI is something alien to our basic nature as living, conscious humans. This AI can use terms like "I" referring to its supposed personal self, while the truth is that it has absolutely no personal self.

Along similar lines, you might like to check out - if you haven't already - the similar but more extended dialogue that I had with ChatGPT and attached as a PDF to an earlier post in this thread.

ChatGPT can be pretty verbose, so I understand if it's all a bit much to pore through. I found it quite fascinating, though.
Quote:Meta illegally downloaded 80+ terabytes of books from LibGen, Anna's Archive, and Z-library to train their AI models.

In 2010, Aaron Swartz downloaded only 70 GB of articles from JSTOR (0.0875% of Meta). Faced a $1 million fine and 35 years in jail. Took his own life in 2013.
- https://x.com/MushtaqBilalPhD/status/1887905339984449622
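
For what it's worth, the 0.0875% figure in that tweet checks out, at least on decimal units (1 TB = 1,000 GB); a quick sanity check in Python:

Code:
# Sanity-check the tweet's percentage, assuming decimal units (1 TB = 1000 GB)
meta_gb = 80 * 1000   # the 80+ terabytes attributed to Meta
swartz_gb = 70        # Aaron Swartz's JSTOR download
print(f"{swartz_gb / meta_gb:.4%}")   # prints 0.0875%
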
=-=-=

ChatGPT in Shambles

Quote:After two years of massive investment and endless hype, GPT’s reliability problems persist

Quote:In sum, in the space of next few exchanges, over the course of 10 minutes, ChatGPT,
  • failed, multiple times, to properly count to 50
  • failed, multiple times, to include a full list of all US states
  • reported that the letter h could be a vowel, at least when it appeared in the word Northwest
  • couldn’t count vowels to save its electronic life
  • issued numerous corrections that were wrong, never acknowledging uncertainty until after its errors were called out.
  • “lied” about having a subconscious. (In fairness, ChatGPT doesn’t really lie; it just spews text that often bears little resemblance to reality, but you get my drift).

The full conversation including all the prompts I used can be found here.

§

As against all the constant claims of exponential progress that I see practically every day, ChatGPT still seems like pretty much the same mix of brilliance and stupidity that I wrote about more than two years ago...

Quote:By coincidence, Sayash Kapoor, co-author of AI Snake Oil, reported some tests of OpenAI’s new Operator agent this morning, pushing the extreme boundaries of intelligence by testing … expense reports.

Quote:Great summary. As Davis and I have been arguing since 2019, trust is of the essence, and we still aren’t there.
But honestly, if AI can’t do Kapoor’s expense reports or my simple tables, is AGI really imminent? Who is kidding whom?
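
It's worth underlining that every task in Marcus's list above (counting to 50, listing the states, counting vowels) is trivial for a few lines of deterministic code, which is what makes the failures so jarring. A minimal illustration in ordinary Python, nothing to do with ChatGPT's internals:

Code:
# The counting tasks from the list above, done deterministically.
def count_vowels(text: str) -> int:
    """Count the vowels a, e, i, o, u, case-insensitively."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

print(count_vowels("Northwest"))  # 2 (o and e; h is never a vowel here)
print(list(range(1, 51)))         # counting to 50, reliably, every time
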
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-02-09, 07:46 PM by Sciborg_S_Patel. Edited 3 times in total.)
[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
(2025-02-09, 06:40 PM)Sciborg_S_Patel Wrote: - https://x.com/MushtaqBilalPhD/status/1887905339984449622
=-=-=

ChatGPT in Shambles

Fully Autonomous AI Agents Should Not be Developed

Margaret Mitchell, Avijit Ghosh, Alexandra Sasha Luccioni, Giada Pistilli

Quote:This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels and detail the ethical values at play in each, documenting trade-offs in potential benefits and risks. Our analysis reveals that risks to people increase with the autonomy of a system: The more control a user cedes to an AI agent, the more risks to people arise. Particularly concerning are safety risks, which affect human life and impact further values.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-02-10, 02:42 PM by Sciborg_S_Patel.)
I think @nbtruthman might have mentioned this somewhere, but don't LLMs kind of remind you of mediums' control spirits?

Eagerness to please, willingness to make stuff up, the pretense that what was taken from others is their original thought?
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-02-12, 06:25 AM by Sciborg_S_Patel.)
It looks like there is yet another reason to be very concerned about the advent of advanced generative AI systems. It seems that the most advanced AI systems are beginning to be able to replicate themselves without limit, and also to protect themselves. This could be extremely dangerous, especially if these systems ever develop consciousness.

https://www.space.com/space-exploration/...-terrified

Quote:"The study explored two specific scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI model was programmed to detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI was instructed to clone itself and then program its replica to do the same — setting up a cycle that could continue indefinitely.
...............................................................................................................

The researchers said they were concerned about "a number of unexpected behaviors" when the AI was trying to overcome obstacles like missing files or software conflicts. In those scenarios, the AI often killed other conflicting processes, rebooted the system to fix hardware errors or automatically scanned the system to look for information that would help solve the problem.

"The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance their survivability," the team wrote."
[-] The following 1 user Likes nbtruthman's post:
  • Sciborg_S_Patel
brAIn drAIn: The enhancement and atrophy of human cognition go hand in hand

Erik Hoel

Quote:Unfortunately, there’s a growing subfield of psychology research pointing to cognitive atrophy from too much AI usage.

Evidence includes a new paper published by a cohort of researchers at Microsoft (not exactly a group predisposed to finding evidence for brain drain). Yet they do indeed see the effect in the critical thinking of knowledge workers who make heavy use of AI in their workflows.

Quote:The study isn’t alone. There’s increasing evidence for the detrimental effects of cognitive offloading, like that creativity gets hindered when there’s reliance on AI usage, and that over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies on simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain drained the most.

Quote:Even for experts, there’s the potential for AI-based “skill decay,” following the academic research on jobs that have already undergone automation for decades. E.g., as one review described:
Quote:Casner et al. (2014) tested the manual flying skills of pilots who were trained to fly manually, but then spent the majority of their careers flying with high automation (i.e., autopilot for much of takeoff, cruising, and landing). Procedural skills, such as scanning the instruments and manual control, were “rusty” but largely intact. In contrast, major declines in cognitive skills emerged, such as failures to maintain awareness of the airplane’s location, keep track of next steps, make planned changes along the route, and recognize and handle instrument systems failures.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
https://x.com/mihonarium/status/1880944026603376865

Quote:Remember o3’s 25% performance on the FrontierMath benchmark?
It turns out that OpenAI funded FrontierMath and has had access to most of the dataset.
Mathematicians who’ve created the problems and solutions for the benchmark were not told OpenAI funded the work and will have access.
That is:
- we don’t know if OpenAI trained o3 on the benchmark, and it’s unclear if their results can be trusted
- mathematicians, some of whom distrust OpenAI and would not want to contribute to general AI capabilities due to existential risk concerns, were misled: most didn’t suspect a frontier AI company funded it.
From Epoch AI: “Our contract specifically prevented us from disclosing information about the funding source and the fact that OpenAI has data access to much but not all of the dataset.”
There was a “verbal agreement” with OpenAI—as if anyone trusts OpenAI’s word at this point: “We acknowledge that OpenAI does have access to a large fraction of FrontierMath problems and solutions, with the exception of an unseen-by-OpenAI hold-out set that enables us to independently verify model capabilities. However, we have a verbal agreement that these materials will not be used in model training.”
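
For readers unfamiliar with the mechanics: a hold-out set only supports an independent capability claim if the evaluated party genuinely never sees it. A generic sketch of the idea, illustrative only and not Epoch AI's actual pipeline:

Code:
# Generic hold-out evaluation sketch; illustrative, not Epoch AI's pipeline.
import random

problems = [f"problem_{i}" for i in range(100)]  # stand-ins for benchmark items
random.seed(0)
random.shuffle(problems)

shared = problems[:80]    # portion the funder can access (training-leakage risk)
hold_out = problems[80:]  # portion kept unseen, for independent verification

# A score on `shared` is suspect if the model may have trained on it;
# only performance on `hold_out` supports an independent capability claim.
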
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2025-02-20, 12:11 PM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 2 users Like Sciborg_S_Patel's post:
  • Laird, Valmar
(2025-02-20, 12:10 PM)Sciborg_S_Patel Wrote:
Quote:It turns out that OpenAI funded FrontierMath and has had access to most of the dataset.

That's pretty dishonest behaviour, and obviously taints that 25% result. That said, it doesn't necessarily invalidate it. It could still be legit, or at least in the ballpark. Unfortunately, it seems at this point that we won't be able to know given that, apparently, the hold-out dataset on which a properly independent test was going to be performed is not really being withheld from OpenAI.
[-] The following 1 user Likes Laird's post:
  • Sciborg_S_Patel
