AI megathread

195 Replies, 7939 Views

An excellent new article has just come out in The New Yorker on why present generative A.I. technology will never be able to make real art (or real literature).

https://www.newyorker.com/culture/the-we...o-make-art

I think the unvarnished, unfortunate truth is that a great many human consumers of these productions probably won't care, primarily because works generated this way are cheaper and those consumers are too unsophisticated to tell the difference.

A few key excerpts from this lengthy and eloquent and insightful essay:

Quote:"The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying.

Some have claimed that large language models are not laundering the texts they’re trained on but, rather, learning from them, in the same way that human writers learn from the books they’ve read. But a large language model is not a writer; it’s not even a user of language. Language is, by definition, a system of communication, and it requires an intention to communicate. Your phone’s auto-complete may offer good suggestions or bad ones, but in neither case is it trying to say anything to you or the person you’re texting. The fact that ChatGPT can generate coherent sentences invites us to imagine that it understands language in a way that your phone’s auto-complete does not, but it has no more intention to communicate than the iPhone.

It is very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you.
...................................
ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language. What makes the words “I’m happy to see you” a linguistic utterance is not that the sequence of text tokens that it is made up of are well formed; what makes it a linguistic utterance is the intention to communicate something.
...................................
The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.
...................................
Some individuals have defended large language models by saying that most of what human beings say or write isn’t particularly original. That is true, but it’s also irrelevant. When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered. Likewise, when you tell someone that you’re happy to see them, you are saying something meaningful, even if it lacks novelty.

Something similar holds true for art. Whether you are creating a novel or a painting or a film, you are engaged in an act of communication between you and your audience. What you create doesn’t have to be utterly unlike every prior piece of art in human history to be valuable; the fact that you’re the one who is saying it, the fact that it derives from your unique life experience and arrives at a particular moment in the life of whoever is seeing your work, is what makes it new. We are all products of what has come before us, but it’s by living our lives in interaction with others that we bring meaning into the world. That is something that an auto-complete algorithm can never do, and don’t let anyone tell you otherwise."
[-] The following 4 users Like nbtruthman's post:
  • Ninshub, Typoz, Laird, Sciborg_S_Patel
(2024-09-04, 10:56 PM)Valmar Wrote: Also... the AI runs calculations for all possible moves, whereas the human mind does not need to consider all possibilities, just the relevant ones. Besides that, brains don't function like computers, so the power differences are quite meaningless.

I don't know if the power differences are "meaningless".

If nothing else, it suggests even the structure of the brain is different from a usable implementation of a Turing Machine.

Admittedly, I'm one of those weird proponents who does think we could instantiate consciousness by creating synthetic life; it's just that said life would need to have as-yet-unknown structures akin to whatever it is that lets our brains instantiate consciousness.

(I use "instantiate" here because I don't think a brain produces consciousness, yet nonetheless I think brains are important to having localized first person PoVs in this universe.)
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2024-09-04, 10:56 PM)Valmar Wrote: Also... the AI runs calculations for all possible moves

Generally, that's not true: exhaustively calculating all possible moves is infeasible even for the most powerful of current computers, so game-playing AIs search only a small, pruned fraction of the possibilities. A rough illustration using chess is below.
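As a back-of-the-envelope sketch (my own illustrative numbers, not anything from this thread): chess offers on the order of 35 legal moves per position, and a game runs to dozens of half-moves, so the full game tree is astronomically large. A minimal Python check of the arithmetic, assuming a branching factor of 35 and a depth of 80 half-moves:

Code:
# Toy estimate of the size of the chess game tree (illustrative numbers only).
branching_factor = 35   # rough number of legal moves in a typical position
depth = 80              # rough number of half-moves in a typical game

positions = branching_factor ** depth
print(f"lines to enumerate: roughly 10^{len(str(positions)) - 1}")
# Prints roughly 10^123 -- vastly more than the ~10^80 atoms in the observable
# universe, which is why real engines prune the tree rather than enumerate it.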
[-] The following 2 users Like Laird's post:
  • nbtruthman, Sciborg_S_Patel
I just encountered an example of the generative AI "hallucinations" often reported from ChatGPT-4 and others. I tried to use Message Bree AI to generate a text transcript of the closed-caption subtitles accompanying a long YouTube video. This AI said it couldn't do that itself, but it came up with a long procedure that it claimed would let me accomplish the task with an available app, for which it furnished the Internet address. I tried the procedure and discovered that the address was phony and the app was fictitious. No such app existed; the AI had dreamed it up in order to answer the question. And it was so cooperative, friendly and forthcoming with plausible answers, which only shows how dangerous these systems can be to rely on. Luckily, in this case the imaginative lie was only too obvious. You just can't believe these things.
[-] The following 4 users Like nbtruthman's post:
  • Valmar, Laird, Typoz, Sciborg_S_Patel
(2024-09-05, 11:08 PM)nbtruthman Wrote: I just encountered an example of the generative AI "hallucinations" often reported from ChatGPT-4 and others. [...] Luckily, in this case the imaginative lie was only too obvious. You just can't believe these things.

Were you able to challenge the AI by pointing out the issues with its proposed solution, or had the session already ended by the time you tried it out?
[-] The following 2 users Like Typoz's post:
  • Sciborg_S_Patel, Laird
(2024-09-06, 08:59 AM)Typoz Wrote: Were you able to challenge the AI by pointing out the issues with its proposed solution, or had the session already ended by the time you tried it out?

The latter. Since then I have given up in disgust on trying to use AI systems to answer questions requiring research.
[-] The following 4 users Like nbtruthman's post:
  • Valmar, Jim_Smith, Sciborg_S_Patel, Typoz
(2024-09-06, 02:50 PM)nbtruthman Wrote: The latter. Since then I have given up in disgust on trying to use AI systems to answer questions requiring research.

I feel the same way. 

But with some of the search engines, one key word might give me a lot of hits that have nothing to do with my other keywords. 

And I find that some of the AIs that provide links can be better than a search engine at finding links that actually address the subject of my question (or list of keywords). Even when the links don't say quite what the AI claims they do, that can still be more useful than a plain search engine. (I look at the links to see what they really say.)

The AI seems to "understand" my questions better than a search engine that uses something like hit frequency to feed me the most popular sites.
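To make that difference concrete, here is a toy Python sketch (my own illustration, assuming nothing about how any particular search engine or AI actually works): one score counts raw keyword hits, roughly the "hit frequency" idea, while the other uses a crude stand-in for the vector "embeddings" that AI-style retrieval systems use to match overall meaning rather than exact words.

Code:
from collections import Counter
from math import sqrt

DOCS = {
    "doc1": "nuclear plant reopened to supply electricity for AI data centers",
    "doc2": "most popular site for celebrity news and gossip",
    "doc3": "how much electricity large language models consume during training",
}

def keyword_score(query, doc):
    # Raw keyword overlap -- roughly the "hit frequency" idea.
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum(min(q[w], d[w]) for w in q)

def toy_embedding(text):
    # Toy "embedding": a bag of character trigrams (real systems use learned vectors).
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(a[k] * b[k] for k in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

query = "AI energy demand and nuclear electricity"
for name, doc in DOCS.items():
    kw = keyword_score(query, doc)
    sem = cosine(toy_embedding(query), toy_embedding(doc))
    print(name, "keyword hits:", kw, "similarity:", round(sem, 3))

The second score is only a cartoon of what real embedding models do, but it shows the basic idea: ranking by overall similarity of the text rather than by counting exact keyword matches.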
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
[-] The following 2 users Like Jim_Smith's post:
  • Sciborg_S_Patel, Valmar
Bad times for human artists because of generative AI 

A new article about this, at https://www.cartoonbrew.com/artist-right...43035.html

A sign of the times. Here's the first large Hollywood entertainment corporation to figure it can drastically cut costs and increase profits by openly axing nearly all of its artists and illustrators, and instead having an advanced generative AI trained on its own many productions (no copyright-violation problems or usage fees). The production/distribution company is Lionsgate. Hooray for unshackled naked capitalism.

One question among many that come to mind: what will happen to the quality of these new AI-aided entertainment productions (movies and TV shows)? So far, generative AI systems can generate new "creative" output only for so long before the quality degenerates, because they feed on new training data and eventually there is none left to give them. (A toy illustration of this sort of degeneration follows the quote below.) So the issue will be how long the public accepts progressively deteriorating visual effects in their movies and TV shows before retaliating at the box office. Unfortunately, the track record isn't good, since the public seems willing to be "trained" into slowly accepting deteriorative changes introduced because they are profitable new high technology.

Quote:"Lionsgate has become the first significant Hollywood studio to go all-in on AI. The company today announced a “first-of-its-kind” partnership with AI research company Runway to create and train an exclusive new AI model based on its portfolio of film and tv content.

Lionsgate’s exclusive model will be used to generate what it calls “cinematic video” which can then be further iterated using Runway’s technology. The goal is to save money – “millions and millions of dollars” according to Lionsgate studio vice chairman Michael Burns – by having filmmakers and creators use its AI model to replace artists in production tasks such as storyboarding.
.................................................
....(The company) envisions the tool as a way to eventually replace vfx artists, and wants the model to be used to create backgrounds and special effects."
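On the quality-degeneration worry above: a toy Python sketch of the widely reported "model collapse" effect (my own illustration, nothing specific to Lionsgate or Runway). Each "generation" is fit only to samples drawn from the previous generation's model, with a mild bias toward its most typical output, and the diversity of what it produces steadily shrinks:

Code:
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # generation 0: "human-made" data

for generation in range(6):
    mu = statistics.fmean(data)       # fit a very simple "model": mean and spread
    sigma = statistics.stdev(data)
    print(f"generation {generation}: spread (std dev) = {sigma:.3f}")
    # The next generation trains only on this model's outputs, keeping the
    # 90% most typical samples (mimicking a bias toward safe, average output).
    samples = [random.gauss(mu, sigma) for _ in range(1000)]
    samples.sort(key=lambda x: abs(x - mu))
    data = samples[:900]

Real generative models are vastly more complicated, but the worry is the same mechanism: without fresh human-made material, the output drifts toward an ever narrower, blander average.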
[-] The following 3 users Like nbtruthman's post:
  • Typoz, Laird, Sciborg_S_Patel
(2024-09-04, 10:01 AM)Jim_Smith Wrote: One difference between biological intelligence and artificial intelligence is that biological intelligence uses much less energy.

https://www.scientificamerican.com/artic...ectricity/

https://www.popsci.com/environment/three...microsoft/

Quote:Massive AI energy demand is bringing Three Mile Island back from the dead
Microsoft will be the sole purchaser of energy generated from the refurbished site.

Power-hungry generative AI models are quickly making Big Tech's sizable energy requirements even more demanding and forcing companies to seek out energy from unlikely places. While Meta and Google are exploring modern geothermal tech and other newer experimental energy sources, Microsoft is stepping back in time. This week, the company signed a 20-year deal to source energy from the storied Three Mile Island nuclear facility in Pennsylvania, a site once known for the worst reactor accident in US history.
The first gulp from the glass of science will make you an atheist, but at the bottom of the glass God is waiting for you - Werner Heisenberg. (More at my Blog & Website)
[-] The following 2 users Like Jim_Smith's post:
  • nbtruthman, Sciborg_S_Patel


Quote:Have we discovered an ideal gas law for AI?
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 2 users Like Sciborg_S_Patel's post:
  • Typoz, Silence
