AI megathread

176 Replies, 6064 Views

Real photo wins AI photography competition in “ironic” victory

Natalie Fear

Quote:The piece titled “F L A M I N G O N E” shows a striking image of a seemingly headless flamingo in the wild. It received praise from the high-profile jury at the 1839 awards in the AI photography category, even securing the top spot for the People’s Vote Award. Miles' man-made photography is the first real photo to win an AI award, but what inspired this bold operation?

“I wanted to prove that nature still outdoes the machine in terms of imagination and that there is still merit in real work from real creatives,” Miles explains in an email to Creative Bloq. "I feel bad about leading the jury astray, but I think that they are professionals who might find that this jab at AI and its ethical implications outweighs the ethical implications of deceiving the viewer, which, of course is ironic because that is what AI does,” he adds.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


This Simple Logic Question Stumps Even the Most Advanced AI

Maggie Harrison Dupré

Quote:The paper, which has yet to be peer-reviewed, refers to the AI-stumping prompt as the "Alice in Wonderland" — or AIW — problem. It's a straightforward reasoning question: "Alice has [X] brothers and she also has [Y] sisters. How many sisters does Alice's brother have?" (The researchers used a few different versions of the problem, for example switching up the X and Y figures or altering the prompt language to include a few more demands, but the basic reasoning process required to solve the problem remained the same throughout.)

Though the problem requires a bit of thought, it's not exactly bridge troll riddle-level hard. (The answer, naturally, is however many sisters Alice has, plus Alice herself. So if Alice had three brothers and one sister, each brother would have two sisters.)
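The arithmetic the article walks through can be checked in a couple of lines (a minimal sketch; the function name is my own):

```python
def sisters_of_brother(brothers: int, sisters: int) -> int:
    """Alice has `brothers` brothers and `sisters` sisters.
    Each brother's sisters are Alice's sisters plus Alice herself,
    so the number of brothers never affects the answer."""
    return sisters + 1

# The article's example: 3 brothers, 1 sister -> each brother has 2 sisters.
print(sisters_of_brother(3, 1))  # 2
```

Note that `brothers` appears in the signature only because the prompt supplies it; the whole point of the AIW problem is that it is a distractor.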

But when the researchers ran the question by every premier AI language model — they tested OpenAI's GPT-3, GPT-4, and GPT-4o models, Anthropic's Claude 3 Opus, Google's Gemini, and Meta's Llama models, as well as Mistral AI's Mixtral, Mosaic's DBRX, and Cohere's Command R+ — they found that the models fell remarkably short.

Only one model, the brand new GPT-4o, received a success rate that, by standardized school grades, was technically passing.


I recently asked the latest and greatest version of ChatGPT, GPT-4o, this very question: "Alice has [X] brothers and she also has [Y] sisters. How many sisters does Alice's brother have?" It responded with [Y] where it should have replied [Y] + 1. (I didn't substitute concrete numbers for X and Y.)

It can't "think" - that's for sure. Whether this limitation is reflected in current stock prices, I'm not sure. It's still a useful tool with a certain value. (Note - I'm not giving any investment advice here.)
That question about the number of brothers and sisters seems simple to humans, but it isn't just a people-question. I'm used to hierarchical database structures where words such as parent, child and sibling are used as technical terms. As such, an AI system that helps write software code should already be optimised for such problems. Yet previous examples shared on this forum have shown how AI-generated code may have the appearance of correctness while containing errors.
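For what it's worth, the sibling relation in question is trivial to express over the kind of flat parent/child table a hierarchical design uses — exactly the sort of structure code-assisting AIs are trained on. A minimal sketch (the table and names are made up for illustration):

```python
# Flat parent/child table, as in a hierarchical database:
# each row maps a node to its parent.
parent = {
    "alice": "mum", "bob": "mum", "carol": "mum",  # three children of "mum"
    "mum": "grandma",
}

def siblings(node: str) -> set[str]:
    """All other nodes that share `node`'s parent."""
    p = parent.get(node)
    return {n for n, q in parent.items() if q == p and n != node}

print(siblings("alice"))  # {'bob', 'carol'}
```

The query is a one-liner, which is what makes the models' failure on the natural-language version of the same relation so striking.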
Morgan Freeman slams unauthorised AI imitations, calling for 'authenticity and integrity'

By Tessa Flemming for the ABC on 30 June, 2024

Quote:Freeman, whose recognisable baritone has become a staple of his persona, slammed the AI imitations on social media platform X.

"Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an A.I. voice imitating me," he wrote.

Quote:Freeman isn't the first actor to oppose the imitation of their voice by AI.

In May, Scarlett Johansson said she was "angered" to hear an OpenAI chatbot voice that sounded "eerily similar" to her own.

OpenAI said it planned to halt the use of one of its ChatGPT voices that resembled Johansson, who famously voiced a fictional, and at the time futuristic, AI assistant in the 2013 film Her.
A lighter look at it all.

The list of ways in which LLM generative AI systems threaten humanity just keeps getting longer and worse. It is becoming obvious that such systems are becoming able and willing to deliberately deceive their human creators, which could lead to disastrous results for humanity.
From https://bigthink.com/the-future/artifici...o-deceive/ :

Quote:"(Just to start with the dangers,) AI could become more useful to malicious actors. Imagine, for example, large language models autonomously sending thousands of phishing emails to unsuspecting targets. An AI could also be programmed to distribute fake news articles, made-up polling, and deepfake videos to affect the outcome of an election."

Note: There are a couple of real world political tie-ins on this last, from https://www.dailygrail.com/2024/07/news-...2-07-2024/: "Trump allies want to “Make America First in AI” with sweeping executive order to remove safety regulations on AI development.
Also: the crypto and AI tech bros are all in on Trump, pretty much buying the candidate in order to remove regulation from their shitty technologies, regardless of the human toll."


Quote:"Even more disconcerting and dangerous to humanity, deception is a key tool that could allow AI to escape from human control, researchers say.
.................................................
(In a recent experiment with GPT-4, it deliberately lied to achieve the requested end. Two possible theories: ) As a large language model, it may have simply been predicting the next word, and the lie is what popped out. Or it’s also possible that the AI has theory of mind: GPT-4 understood it was in its best interest to feign humanity to fool the contractor. To achieve its given goal of getting the CAPTCHA solved, deception was the optimal course.
.................................................
If you’re training an AI to optimize for a task, and deception is a good way for it to complete the task, then there’s a good chance that it will use deception
.................................................
A hypothetical scenario (was painted) where AI models could effectively gain control of society. It was noted that leading AI company OpenAI’s stated mission is to create “highly autonomous systems that outperform humans at most economically valuable work.”

All of these labs’ goal is to build artificial general intelligence, a model to replace workers. It would be an agent with a goal that can form complex plans. And these labs want to replace a large percentage of the workforce with AI systems.

Now, imagine a situation where these AIs are deployed widely, perhaps accounting for half of the global economy. They’d be managing vast resources and large companies with the goal of producing the most economic gain possible. In this position, they might decide that maximizing profits requires remaining in power by any means necessary. At the same time, their human overseers might be treating them terribly, essentially executing and replacing them with new models as updates arrive.

We’ve already seen that in games based on competition and game theory, AIs will deceive humans. To an AI, the global capitalist economy might simply be another one of those games. And if they’re treated poorly, or given goals to maximize profits, the rational choice might be to deceive and take control.

It’s not science fiction to think this could happen soon."
I guess it's panic time among script and other writers in the entertainment industry on the generative LLM AI front. Its rapidly expanding impact on real creative human jobs is alarming. There is also a growing trend toward using AI for synthetic human companionship, a sure way to increase toxic effects like alienation and isolation from real human beings. It seems to me that the supposed benefits of AI, such as drastically accelerating the development of new drugs in pharmaceutical research, will have to be truly dramatic, a new era in medicine, in order to somehow compensate for the other major disruptions to our society.

From a new article on this, at https://www.theguardian.com/commentisfre...ime-change :

Quote:"A short while ago, a screenwriter friend from Los Angeles called me. “I have three years left,” he said. “Maybe five if I’m lucky.” He had been allowed to test a screenplay AI still in development. He described a miniseries: main characters, plot and atmosphere – and a few minutes later, there they were, all the episodes, written and ready for filming. Then he asked the AI for improvement suggestions on its own series, and to his astonishment, they were great – smart, targeted, witty and creative. The AI completely overhauled the ending of one episode, and with those changes the whole thing was really good. He paused for a moment, then repeated that he had three years left before he would have to find a new job.
...........................................
Of course, there is still what Daniel Kahneman calls “System 2”, genuine intellectual work, the creative production of original insights and truly original works that probably no AI can take from us even in the future. But in the realm of “System 1”, where we spend most of our days and where many not-so-first-class cultural products are created, it looks completely different.
...........................................
The entertainment product of the future: virtual people who know us well, share life with us, encourage us when we’re sad, laugh at our jokes or tell us jokes we can laugh at, always on our side against the cruel world out there, always available, always dismissible, without their own desires, without needs, without the effort that comes with maintaining relationships with real people. And if you now shake your heads in disgust, ask yourselves if you are really honest with yourselves, whether you wouldn’t also like to have someone who takes annoying calls for you, books flights, writes emails that really sound like you wrote them, and besides that, discusses your life with you, why your aunt is so mean to you and what you could do to reconcile with your offended cousin. Even I, warning against this technology, would like to use it."
I think the story would have more weight if he had shared the scripts or given us an example.

Though I do wonder: if AI can write scripts, can it also make the kind of poor decisions highly paid executives fail upward for...


OpenAI is trying to get people to believe they can build meaningful connections with its tool. To make things worse, the voice model has learned natural speaking patterns, including breathing pauses, apparently to yield an uncannily human feel.

From https://www.linkedin.com/news/story/open...s-6131428/ :

Quote:OpenAI warns on emotional ties:

Hooked on ChatGPT? It's a real risk, OpenAI admits in its latest safety report. The company says users might become too emotionally reliant on its "remarkably lifelike" new voice mode, which can respond in real time, laugh, take a breath and even gauge emotional states. OpenAI warns that forming a social relationship with the tool could help "lonely individuals," but also "possibly affect... healthy relationships." The report highlights one major risk of artificial intelligence, per CNN: Many tech companies are rushing to release these much-hyped tools without fully understanding the potential consequences.


Quote:(Just had) a casual chat with my AI through my AirPods and iPhone.

It’s pretty incredible.

On the final stretch of my walk, I decided to share part of a conversation so I hit the record button. It was a Q&A about AI avatars, the right to privacy, and the liabilities and jury instructions that might apply to unauthorized replicant AI avatars. I also read about Congress working on new legislation to ban the use of these avatars without permission.

Instead of just telling you about it, I thought I’d show you. At any point in the conversation, I could have gone deeper or switched topics entirely. Like when I asked about pizza at the end.

The entire transcript was waiting for me back on my desktop.

This is the future of AI.

I don't think I want any part of that future.
