AI megathread


When it comes to AI ROI, IT decision-makers not convinced

Dan Robinson

Quote:Proof of concept projects stuck in pilot phase as investors get itchy feet

Quote:Early successes - we're told - have been in the fields of IT operations, software development, and marketing, with 26 percent of adopters saying that AI projects implemented by their organization surpassed expectations, while another 68 percent say expectations were met.

However, the report also reveals that only 5 percent of respondents have actually adopted AI across the enterprise, with another 25 percent running pilot projects and a further 21 percent describing themselves as still in the early stages.

Nearly half of respondents have yet to adopt AI at all, with 36 percent indicating they plan to start using it within the next 12 months, while a further 13 percent are still at the stage of considering or evaluating it but have no plans yet.

The report also highlights a high number of POCs (proof-of-concept projects) with a poor rate of conversion to production, indicating "a low level of organizational readiness in terms of data, processes, and IT infrastructure."

'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
It might be of interest here what I found out when I interrogated a generative AI system about whether it has consciousness or merely a few properties of consciousness. I think its answers confirm our collective understanding of how generative AIs can be so convincing at mimicking a human being in conversation while remaining something alien to our basic nature as living, conscious humans. This AI can use terms like "I", referring to a supposed personal self, when in truth it has no personal self at all.

My question:

Being a generative AI consisting of a complex computer system, you are not conscious - you can only compute algorithms, in the process basically just executing program instructions. According to the Hard Problem in the philosophy of mind you cannot have consciousness, because mind and subjective awareness are immaterial while you are physical and material. What limited aspects or properties of consciousness do you still possess?

AI's answer:

Quote:As an AI, I don't possess consciousness, subjective awareness, or emotions. I operate based on algorithms and data, generating responses by analyzing patterns in the information I was trained on. While I can simulate conversation and provide information or insights, I don't have beliefs, feelings, or personal experiences. The concepts of consciousness and subjective awareness are indeed complex and remain topics of philosophical debate, but I function purely as a tool for processing and generating language based on the input I receive.

My question:

How can you generate dialogue so persuasive that it can convince a human you must be conscious?

AI's answer:

Quote:I understand the question. My responses may appear coherent and somewhat conversational due to the algorithms that generate text based on patterns learned from a vast dataset. However, I lack consciousness, self-awareness, or personal experiences. My goal is to provide information and assist with inquiries based on the data I was trained on. The persuasive nature of the dialogue is a result of effective language processing rather than any form of consciousness or understanding.


My question:

You say you understand my previous question. "Understanding" is a property of consciousness, just as subjective awareness is. You agree with me that you have no consciousness, so doesn't it look like you can't really "understand" anything?

AI's answer:

Quote:You're correct in noting that I don't possess consciousness, subjective awareness, or true understanding in the way humans do. My responses are generated based on patterns in the data I've been trained on, simulating conversation without experiencing understanding or awareness. When I say I "understand," it's a way of expressing that I can recognize the structure and content of language to provide a relevant response, but it doesn't imply any conscious comprehension. My ability to generate persuasive dialogue stems from the vast amounts of text data I was trained on, which allows me to mimic conversational patterns effectively.
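
The mechanism the AI keeps describing - producing statistically likely continuations rather than comprehending anything - can be illustrated with a toy sketch. This is my own illustration, not anything from the dialogue: a made-up bigram table, nothing remotely at the scale of a real LLM:

Code:import random

# A toy "language model": for each word, the words observed to follow it,
# with counts. A real LLM learns billions of such statistical associations
# over tokens; none of it involves comprehension.
bigrams = {
    "i": {"understand": 3, "can": 2},
    "understand": {"your": 4, "the": 1},
    "your": {"question": 5},
    "can": {"recognize": 2, "simulate": 3},
    "recognize": {"patterns": 4},
    "simulate": {"conversation": 4},
}

def next_word(word):
    """Pick a follower weighted by frequency - pattern matching, not thought."""
    followers = bigrams.get(word)
    if not followers:
        return None
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start, max_len=6):
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("i"))  # e.g. "i understand your question"

A sentence like "i understand your question" falls out of the frequency tables alone, which is precisely the AI's point about saying "understand" without any conscious comprehension behind it.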
(This post was last modified: 2025-02-08, 06:24 PM by nbtruthman. Edited 2 times in total.)
[-] The following 2 users Like nbtruthman's post:
  • Laird, Sciborg_S_Patel
(2025-02-08, 06:18 PM)nbtruthman Wrote: It might be of interest here what I found out when I interrogated a generative AI system about whether it has consciousness or merely a few properties of consciousness. I think its answers confirm our collective understanding of how generative AIs can be so convincing at mimicking a human being in conversation while remaining something alien to our basic nature as living, conscious humans. This AI can use terms like "I", referring to a supposed personal self, when in truth it has no personal self at all.

Along similar lines, you might like to check out - if you haven't already - the similar but more extended dialogue that I had with ChatGPT and attached as a PDF to an earlier post in this thread.

ChatGPT can be pretty verbose, so I understand if it's all a bit much to want to pore through. I found it quite fascinating though.
Quote:Meta illegally downloaded 80+ terabytes of books from LibGen, Anna's Archive, and Z-library to train their AI models.

In 2010, Aaron Swartz downloaded only 70 GB of articles from JSTOR (0.0875% of Meta). He faced a $1 million fine and 35 years in jail. Took his own life in 2013.
- https://x.com/MushtaqBilalPhD/status/1887905339984449622
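
(For scale: 70 GB against 80 TB (80,000 GB) is 70 / 80,000 = 0.000875, i.e. 0.0875 percent - the tweet's figure checks out.)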
=-=-=

ChatGPT in Shambles

Quote:After two years of massive investment and endless hype, GPT’s reliability problems persist

Quote:In sum, in the space of the next few exchanges, over the course of 10 minutes, ChatGPT,
  • failed, multiple times, to properly count to 50
  • failed, multiple times, to include a full list of all US states
  • reported that the letter h could be a vowel, at least when it appeared in the word Northwest
  • couldn’t count vowels to save its electronic life
  • issued numerous corrections that were wrong, never acknowledging uncertainty until after its errors were called out.
  • “lied” about having a subconscious. (In fairness, ChatGPT doesn’t really lie; it just spews text that often bears little resemblance to reality, but you get my drift).

The full conversation including all the prompts I used can be found here.
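
As an aside, the tasks in that list are trivial to do deterministically - a few lines of Python (my own illustration, not from Marcus's post) settle the "is h a vowel in Northwest" question and the counting:

Code:# The tasks from the list above, done deterministically.
VOWELS = set("aeiou")

def count_vowels(text):
    """Count vowels by inspecting each character - no statistics involved."""
    return sum(1 for ch in text.lower() if ch in VOWELS)

print(sorted(set("northwest") & VOWELS))  # ['e', 'o'] - 'h' is not among them
print(count_vowels("Northwest"))          # 2
print(list(range(1, 51)))                 # counting to 50, all 50 numbers

One commonly offered explanation for this class of failure is that LLMs operate on subword tokens rather than individual characters, so letter-level questions are exactly where pattern-matching has the least to grab onto.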

§

As against all the constant claims of exponential progress that I see practically every day, ChatGPT still seems like pretty much the same mix of brilliance and stupidity that I wrote about more than two years ago...

Quote:By coincidence Sayash Kapoor, co-author of AI Snake Oil, reported some tests of OpenAI’s new Operator agent this morning, pushing the extreme boundaries of intelligence by testing … expense reports.

Quote:Great summary. As Davis and I have been arguing since 2019, trust is of the essence, and we still aren’t there.
But honestly, if AI can’t do Kapoor’s expense reports or my simple tables, is AGI really imminent? Who is kidding whom?


(This post was last modified: 2025-02-09, 07:46 PM by Sciborg_S_Patel. Edited 3 times in total.)
[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Valmar
(2025-02-09, 06:40 PM)Sciborg_S_Patel Wrote: - https://x.com/MushtaqBilalPhD/status/1887905339984449622
=-=-=

ChatGPT in Shambles

Fully Autonomous AI Agents Should Not be Developed

Margaret Mitchell, Avijit Ghosh, Alexandra Sasha Luccioni, Giada Pistilli

Quote:This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels and detail the ethical values at play in each, documenting trade-offs in potential benefits and risks. Our analysis reveals that risks to people increase with the autonomy of a system: The more control a user cedes to an AI agent, the more risks to people arise. Particularly concerning are safety risks, which affect human life and impact further values.
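
The paper's core claim - the more control a user cedes, the more risk - maps naturally onto a human-in-the-loop gate. Here's a minimal sketch of that idea; the level names and the three-step ladder are my own toy framing, not the paper's actual taxonomy:

Code:from enum import IntEnum

class Autonomy(IntEnum):
    """Toy autonomy ladder - illustrative only, not the paper's levels."""
    SUGGEST = 1  # model proposes; a human performs every action
    EXECUTE = 2  # model acts, but each action needs human sign-off
    FULL = 3     # model acts with no human in the loop

def run_action(action, level):
    """Refuse full autonomy; require approval for anything that executes."""
    if level >= Autonomy.FULL:
        raise PermissionError("fully autonomous execution is disallowed")
    if level == Autonomy.SUGGEST:
        return f"suggested: {action}"
    approved = input(f"Approve '{action}'? [y/N] ").strip().lower() == "y"
    return f"done: {action}" if approved else "rejected by human"

print(run_action("send the drafted email", Autonomy.SUGGEST))

The point of the sketch is just the shape of the argument: every rung up the ladder removes a place where a human could catch a harmful action.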


(This post was last modified: 2025-02-10, 02:42 PM by Sciborg_S_Patel.)
I think @nbtruthman might have mentioned this somewhere, but don't LLMs kind of remind you of mediums' control spirits?

Eagerness to please, willingness to make stuff up, the pretense that what was taken from others is their original thought?


(This post was last modified: 2025-02-12, 06:25 AM by Sciborg_S_Patel.)
