AI megathread

176 Replies, 6184 Views

An excellent new article has just come out in The New Yorker on why present generative A.I. technology isn't ever going to be able to make real art (or real literature).

https://www.newyorker.com/culture/the-we...o-make-art

I think the unfortunate, unvarnished truth is that a great many human consumers of these productions probably don't care, primarily because works generated this way are cheaper, and those consumers are too unsophisticated to tell the difference.

A few key excerpts from this lengthy, eloquent and insightful essay:

Quote:"The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying.

Some have claimed that large language models are not laundering the texts they’re trained on but, rather, learning from them, in the same way that human writers learn from the books they’ve read. But a large language model is not a writer; it’s not even a user of language. Language is, by definition, a system of communication, and it requires an intention to communicate. Your phone’s auto-complete may offer good suggestions or bad ones, but in neither case is it trying to say anything to you or the person you’re texting. The fact that ChatGPT can generate coherent sentences invites us to imagine that it understands language in a way that your phone’s auto-complete does not, but it has no more intention to communicate than the iPhone.

It is very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you.
...................................
ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language. What makes the words “I’m happy to see you” a linguistic utterance is not that the sequence of text tokens that it is made up of are well formed; what makes it a linguistic utterance is the intention to communicate something.
...................................
The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.
...................................
Some individuals have defended large language models by saying that most of what human beings say or write isn’t particularly original. That is true, but it’s also irrelevant. When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered. Likewise, when you tell someone that you’re happy to see them, you are saying something meaningful, even if it lacks novelty.

Something similar holds true for art. Whether you are creating a novel or a painting or a film, you are engaged in an act of communication between you and your audience. What you create doesn’t have to be utterly unlike every prior piece of art in human history to be valuable; the fact that you’re the one who is saying it, the fact that it derives from your unique life experience and arrives at a particular moment in the life of whoever is seeing your work, is what makes it new. We are all products of what has come before us, but it’s by living our lives in interaction with others that we bring meaning into the world. That is something that an auto-complete algorithm can never do, and don’t let anyone tell you otherwise."
(This post was last modified: 2024-09-05, 03:51 PM by nbtruthman. Edited 1 time in total.)
[-] The following 4 users Like nbtruthman's post:
  • Ninshub, Typoz, Laird, Sciborg_S_Patel
(2024-09-04, 10:56 PM)Valmar Wrote: Also... the AI runs calculations for all possible moves, whereas the human mind does not need to consider all possibilities, just the relevant ones. Besides that, brains don't function like computers, so the power differences are quite meaningless.

I don't know if the power differences are "meaningless".

If nothing else, it suggests even the structure of the brain is different from a usable implementation of a Turing Machine.

Admittedly I'm one of those weird proponents who does think we could instantiate consciousness by creating synthetic life, just that said life would need to have as yet unknown structures akin to whatever it is that lets our brains instantiate consciousness.

(I use "instantiate" here because I don't think a brain produces consciousness, yet nonetheless I think brains are important to having localized first person PoVs in this universe.)
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2024-09-04, 10:56 PM)Valmar Wrote: Also... the AI runs calculations for all possible moves

Generally, that's not true. Most of the time, that's infeasible even for the most powerful current computers.
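To make that concrete, here's a rough back-of-the-envelope estimate. The numbers below (an average branching factor of ~35 legal moves per position, and a typical game length of ~80 plies) are commonly cited approximations in the spirit of Shannon's classic estimate for chess - a sketch, not a precise count:

```python
import math

# Rough Shannon-style estimate of the size of the chess game tree.
# Assumed approximations: ~35 legal moves per position (branching factor)
# and ~80 plies (half-moves) in a typical game.
BRANCHING_FACTOR = 35
PLIES = 80

game_tree_size = BRANCHING_FACTOR ** PLIES

# Even at a generous trillion positions per second, exhaustive
# enumeration would take vastly longer than the age of the universe.
positions_per_second = 10 ** 12
seconds_needed = game_tree_size // positions_per_second

print(f"game tree ~ 10^{int(math.log10(game_tree_size))} positions")
print(f"~ 10^{int(math.log10(seconds_needed))} seconds to enumerate them all")
```

That's on the order of 10^123 positions - far more than the estimated number of atoms in the observable universe. Real engines sidestep the blow-up entirely: they search to a limited depth with alpha-beta pruning and heuristic evaluation, examining only a minuscule, selectively chosen fraction of the tree.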
[-] The following 2 users Like Laird's post:
  • nbtruthman, Sciborg_S_Patel
I just encountered an example of the generative AI "hallucinations" often reported from ChatGPT-4 and others. I tried to use Message Bree AI to generate a text transcript of the closed-caption subtitles accompanying a long YouTube video. Well, this AI said it couldn't do that itself, but it came up with a long procedure it claimed I could use to accomplish this myself with an available app, for which it furnished an Internet address. I tried the procedure and discovered that the address was phony and the app was fictitious. No such app existed; the AI had dreamed it up in order to answer the question. And this AI was so cooperative, friendly and forthcoming with plausible answers - which only shows how dangerous it can be to rely on. Luckily, in this case the imaginative lie was all too obvious. You just can't believe these things.
(This post was last modified: 2024-09-05, 11:12 PM by nbtruthman. Edited 1 time in total.)
[-] The following 4 users Like nbtruthman's post:
  • Valmar, Laird, Typoz, Sciborg_S_Patel
(2024-09-05, 11:08 PM)nbtruthman Wrote: I just encountered an example of the generative AI "hallucinations" often reported from ChatGPT-4 and others. I tried to use Message Bree AI to generate a text transcript of the closed-caption subtitles accompanying a long YouTube video. Well, this AI said it couldn't do that itself, but it came up with a long procedure it claimed I could use to accomplish this myself with an available app, for which it furnished an Internet address. I tried the procedure and discovered that the address was phony and the app was fictitious. No such app existed; the AI had dreamed it up in order to answer the question. And this AI was so cooperative, friendly and forthcoming with plausible answers - which only shows how dangerous it can be to rely on. Luckily, in this case the imaginative lie was all too obvious. You just can't believe these things.

Were you able to challenge the AI by pointing out the issues with its proposed solution, or had the session already ended by the time you tried it out?
[-] The following 2 users Like Typoz's post:
  • Sciborg_S_Patel, Laird
(2024-09-06, 08:59 AM)Typoz Wrote: Were you able to challenge the AI by pointing out the issues with its proposed solution, or had the session already ended by the time you tried it out?

The latter. Since then I have given up in disgust on trying to use AI systems to answer questions requiring research.
[-] The following 4 users Like nbtruthman's post:
  • Valmar, Jim_Smith, Sciborg_S_Patel, Typoz
(2024-09-06, 02:50 PM)nbtruthman Wrote: The latter. Since then I have given up in disgust on trying to use AI systems to answer questions requiring research.

I feel the same way. 

But with some of the search engines, one key word might give me a lot of hits that have nothing to do with my other keywords. 

And I find some of the AIs that provide links can be better than a search engine at finding links that address the subject of my question (or list of keywords). Even if the links don't say what the AI claims they do, this can be more useful than a search engine alone. (I look at the links to see what they really say.)

The AI seems to "understand" my questions better than a search engine that uses something like hit frequency to feed me the most popular sites.
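A toy illustration of the hit-frequency problem (the documents and scoring below are hypothetical, purely to show the mechanism): a ranking based on raw keyword hit counts lets one common term swamp the rest of the query, even though the page matching all the terms is usually the one you wanted.

```python
# Toy hit-count ranking: score = total occurrences of any query term.
# A page stuffed with one common term can outrank a page that matches
# every term. (Hypothetical documents, purely for illustration.)

def hit_count_score(doc: str, terms: list[str]) -> int:
    words = doc.lower().split()
    return sum(words.count(t.lower()) for t in terms)

docs = {
    "A": "python python python python tutorial",           # one term, many hits
    "B": "python telepathy experiment results discussed",  # matches both terms
}
query = ["python", "telepathy"]

ranked = sorted(docs, key=lambda d: hit_count_score(docs[d], query), reverse=True)
print(ranked)  # doc A wins on raw hits even though only B matches both terms
```

A system that instead weighs the query as a whole (as the AI assistants appear to, in some fashion) would favour document B here, which matches the intent of the search rather than the loudest keyword.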
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
[-] The following 1 user Likes Jim_Smith's post:
  • Valmar
