AI megathread


An astronomer has suggested a new solution to the Fermi Paradox, the observational question: if life is common in the Universe, then where is everybody? As far as extraterrestrial lifeforms are concerned, all we see is emptiness.

The innovative and timely solution, detailed in a new IFLScience article, implies that the "great filter" may still lie in our near future, taking the form of the demise of our civilization through the development of Artificial General Intelligence (AGI) or, beyond that, Artificial Superintelligence (ASI).

https://www.iflscience.com/new-solution-...n-us-73774

Quote:"First, a little background on the so-called Great Filter. With 200 billion trillion (ish) stars in the universe and 13.7 billion years that have elapsed since it all began, you might be wondering where all the alien civilizations are at. This is the basic question behind the Fermi paradox, the tension between our suspicions of the potential for life in the universe (given planets found in habitable zones, etc) and the fact that we have only found one planet with an intelligent (ish) species inhabiting it.

One solution, or at least a way of thinking about the problem, is known as the Great Filter. Proposed by Robin Hanson of the Future of Humanity Institute at Oxford University, the argument goes that given the lack of observed technologically advanced alien civilizations, there must be a great barrier to the development of life or civilization that prevents them from getting to a stage where they are making big, detectable impacts on their environment that we can witness from Earth.
[...]
In a new paper, Michael Garrett, Sir Bernard Lovell chair of Astrophysics at the University of Manchester and Director of the Jodrell Bank Centre for Astrophysics, outlines how the emergence of artificial intelligence (AI) could lead to the destruction of alien civilizations.
[...]
"Even before AI becomes superintelligent and potentially autonomous, it is likely to be weaponized by competing groups within biological civilizations seeking to outdo one another," Garrett writes in the paper. "The rapidity of AI's decision-making processes could escalate conflicts in ways that far surpass the original intentions. At this stage of AI development, it's possible that the wide-spread integration of AI in autonomous weapon systems and real-time defence decision making processes could lead to a calamitous incident such as global thermonuclear war, precipitating the demise of both artificial and biological technical civilizations."

When AI development reaches Artificial Superintelligence (ASI), the situation could get much worse.

"Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics," Garrett continues. "The practicality of sustaining biological entities, with their extensive resource needs such as energy and space, may not appeal to an ASI focused on computational efficiency—potentially viewing them as a nuisance rather than beneficial. An ASI, could swiftly eliminate its parent biological civilisation in various ways, for instance, engineering and releasing a highly infectious and fatal virus into the environment.""

Thus, the Great Filter hypothesis, if it is actually the case, could be the explanation for the Fermi Paradox we continue to observe.

However, in opposition to this "AGI/ASI run amok" theory, it seems fairly certain to me that the fundamental limitations of AGI, rooted in the fundamental limitations of computational processes themselves, will prevent the development of anything like a truly conscious ASI capable of, and inclined toward, eliminating mankind.

I think there are two much more likely sources of the observed Fermi Paradox. The first (setting aside the very controversial outliers of UFOs/UAPs) is that the total absence of observed extraterrestrial civilizations could simply be due to a near-zero probability that life will spontaneously arise at all. I think this possibility is much more likely than the Great Filter hypothesis; the lack of evidence for extraterrestrial intelligence is basically the most likely conclusion of sciences such as evolutionary biology and the search for viable mechanisms for the spontaneous origin of DNA and of the first living cells. That origin-of-life research continues to be an abysmal failure precisely because such an origin is an extremely statistically improbable event, implying that life is very probably extremely rare in the Universe (barring supernatural sources, of course), perhaps even to the point where Earth hosts the only life in our Milky Way Galaxy.

The second likely source: the fossil record and evolutionary biology indicate that the evolution of complex intelligent life on Earth apparently depended on numerous successive, extremely low-probability evolutionary events (the transitions to multicellularity and to intelligence among them), which theoretically would only very rarely recur in other planetary systems' histories.
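
To make the statistical point concrete, here is a toy back-of-envelope calculation. Every probability in it is a purely illustrative placeholder of my own, not a measured value; the point is only that a chain of small probabilities can swamp even an enormous number of stars.

Code:
# Purely illustrative placeholder numbers; none of these probabilities are measured.
stars = 2e23              # "200 billion trillion (ish)" stars, per the article

p_abiogenesis   = 1e-15   # hypothetical: life starts on a suitable planet
p_multicellular = 1e-5    # hypothetical: life reaches complex multicellularity
p_intelligence  = 1e-5    # hypothetical: a technological species evolves

expected = stars * p_abiogenesis * p_multicellular * p_intelligence
print(f"expected civilizations: {expected:.1e}")  # 2.0e-02 -- likely none

With numbers like these, the expectation is less than one civilization in the entire observable universe, with no Great Filter needed.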
(This post was last modified: 2024-04-15, 04:37 PM by nbtruthman. Edited 4 times in total.)
AI ‘LooksRater’ app blamed for rise in bullying at school

https://www.swindonadvertiser.co.uk/news/24254740.ai-looksrater-app-blamed-rise-bullying-school/


Quote:The app enables children to upload photos of themselves or their peers to be rated out of 10. If a low score is generated for their photo, users have their flaws pointed out to them.
Quote:One parent of a child who attends the school in Blunsdon said: “It's horrible. If you thought Instagram was doing damage to our kid's self-image and self-esteem you should see the effect this has. My daughter was crying because some of the boys in her class put her on the app and shared her score. It's just awful.”
(This post was last modified: 2024-04-17, 08:47 PM by Brian. Edited 1 time in total.)
[-] The following 3 users Like Brian's post:
  • Laird, stephenw, Sciborg_S_Patel
Do you think they will be stupid enough to put AI in charge of nuclear weapons eventually?

[-] The following 2 users Like Brian's post:
  • stephenw, Sciborg_S_Patel
(2024-04-17, 08:45 PM)Brian Wrote: AI ‘LooksRater’ app blamed for rise in bullying at school

https://www.swindonadvertiser.co.uk/news...ng-school/

Unsurprising, and my guess is the company would love this as it means more usage.

The same type of people doing this are trying to force us to accept driverless cars killing people and AI "art" leaving real, hardworking, talented people jobless.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 2 users Like Sciborg_S_Patel's post:
  • stephenw, Brian
Meta claims its newest AI model beats some peers. But its amped-up AI agents are confusing Facebook users

By AP on the ABC on 19 April, 2024

Quote:An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan mums, claiming that it, too, had a child in the New York City school district.

Confronted by group members, it later apologised before the comments disappeared, according to a series of screenshots shown to The Associated Press.

"Apologies for the mistake! I'm just a large language model, I don't have experiences or children," the chatbot told the group.
[-] The following 5 users Like Laird's post:
  • Silence, stephenw, Brian, Typoz, Sciborg_S_Patel
There appear to be some new and very unhealthy trends in LLM-based AI systems.

Derived from https://futurism.com/internet-horrified-...ly-members

Quote:"...bizarre new app called "Vera AI" claims it allows its users to create copies of "your friends or family members," a puzzlingly brazen use of AI tech that doesn't even try to hide the fact that it's looking to replace human connection.

The app's marketing materials even suggest that it can even be used to "recreate someone you miss... and keep talking without limits" — strongly implying that it's designed to allow you to reconnect with dead relatives."
[...]
"Whether you've lost a dear one or you simply want to get closer to someone you don't see often enough, Vera AI is the right app for you," a previous version of the app's description reads. "Recreate anybody you can think of & have real & intimate conversations with them."

All this is a prime example of how some AI developers have been contributing to increased alienation and other psychological harms by marketing LLM systems claimed to let the user contact and converse with deceased loved ones. Apart from communing with deceased relatives, AI chatbots are also letting countless users start relationships with "AI girlfriends" or "boyfriends" on the app Replika, ranging from casual friendships to intimate romances. This allows neurotic users to substitute computers and the Internet for actual human interaction far more easily.
[-] The following 2 users Like nbtruthman's post:
  • Silence, stephenw
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Laird
The ‘dead internet theory’ makes eerie claims about an AI-run web. The truth is more sinister

Jake Renzella & Vlada Rozova


Quote:If you search “shrimp Jesus” on Facebook, you might encounter dozens of images of artificial intelligence (AI) generated crustaceans meshed in various forms with a stereotypical image of Jesus Christ.

Some of these hyper-realistic images have garnered more than 20,000 likes and comments. So what exactly is going on here...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2024-05-20, 06:44 PM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 2 users Like Sciborg_S_Patel's post:
  • Laird, Brian
AI and the great training data robbery

https://thecritic.co.uk/ai-and-the-great-data-robbery/

Generative AI is threatening humanity on a number of fronts. The above is an interesting new article about the growing threat generative AI poses to creative artists of all kinds, especially writers and illustrators. With no regard for copyright law, large language and image-generation AI systems like ChatGPT are right now routinely pillaging the entire Internet for vast amounts of human-produced creative text and picture data, including art, for use as their absolutely necessary training data. They can then produce endless imitations, even in the style of particular human creators, or in other synthesized styles. As of yet there are only a number of copyright infringement cases slowly working their way through the court system, and apparently no major blowback on the parasitic operations of the big corporations doing the pillaging. These corporations employ armies of lawyers, have friends in high places, and apparently are content to mostly ignore the threat of copyright infringement lawsuits over creative output.

Writers and artists especially are facing what looks like an existential crisis, with their jobs and means of livelihood on the chopping block. And when you think about it, if this trend of going to the cheapest source of written text and artistic forms (obviously AI) is extended to its full extent, humanity might end up having to be content with endlessly recycled "artistic" output derived from old human creative work. A future of mostly nothing really new.

Of course there is the argument, which I am sure has been advanced by some of the corporations' lawyers, that human artistic and literary outputs are themselves ultimately generated mechanistically and algorithmically by deep levels of brain processing, so that supposedly unique human creations are really no newer than the generative AI's. I would dismiss such claims as obviously ridiculous.

Anyway, it turns out that some AI companies are taking the wave of infringement litigation against AI corporations seriously enough to try to get around the prospect of incurring the large expense of content licensing. One method being tried is to use masses of previous generative AI output as the training data for new AI systems, but apparently that just doesn't work. I guess this technology doesn't allow that kind of cheating.

Quote:"To obviate the necessity of licensing, models are being trained on the artificial output of AI itself. But this hasn’t gone very well. Researchers have found that AI models “collapse” or go “MAD” — Model Autophagy Disorder — in the words of one team who explicitly evoke the analogy of BSE (mad cow disease) to describe the cannibalistic process. It turns out they need fresh live human material after all. Nothing else will do."


I guess they can't cheat Mother Nature (that is, get around the need for real new creative human input).
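
The collapse is easy to reproduce in miniature. Below is a toy simulation of my own (not the researchers' actual setup), in which each generation's "model" is just a one-dimensional Gaussian fitted to samples from the previous generation's model, with no fresh human data mixed back in:

Code:
# Toy demonstration of model "autophagy": each generation trains only on
# the previous generation's synthetic output.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real human data" from the true distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 21):
    mu, sigma = data.mean(), data.std()    # "train" on the current data
    data = rng.normal(mu, sigma, size=50)  # next generation: synthetic only
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

Each refit compounds the sampling error of the last, and the fitted spread tends to decay across generations: a crude analogue of the loss of diversity behind "Model Autophagy Disorder".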
(This post was last modified: 2024-05-27, 09:44 PM by nbtruthman. Edited 5 times in total.)
[-] The following 2 users Like nbtruthman's post:
  • Laird, Sciborg_S_Patel
Visual artists have help

https://amt-lab.org/reviews/2023/11/nigh...generators

Quote:How Does Nightshade Work?

Nightshade is able to confuse the pairings of words used by AI art generators by creating a false match between images and text. Zhao explained, "So it will, for example, take an image of a dog, alter it in subtle ways, so that it still looks like a dog to you and I — except to the AI, it now looks like a cat.”

https://glaze.cs.uchicago.edu/what-is-glaze.html

Quote:Glaze is a system designed to protect human artists by disrupting style mimicry. At a high level, Glaze works by understanding the AI models that are training on human art, and using machine learning algorithms, computing a set of minimal changes to artworks, such that it appears unchanged to human eyes, but appears to AI models like a dramatically different art style. For example, human eyes might find a glazed charcoal portrait with a realism style to be unchanged, but an AI model might see the glazed version as a modern abstract style, a la Jackson Pollock. So when someone then prompts the model to generate art mimicking the charcoal artist, they will get something quite different from what they expected.
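
For what it's worth, the common trick underneath both tools is adversarial perturbation. Here is a minimal, heavily simplified sketch of the general idea only, not Glaze's or Nightshade's actual algorithm; the feature extractor, image sizes, and epsilon budget below are all hypothetical stand-ins.

Code:
import torch
import torch.nn.functional as F

def cloak(image, decoy_features, extractor, epsilon=4 / 255, steps=10):
    """Nudge `image` so its features move toward a decoy style's features,
    while keeping every pixel within +/- epsilon of the original."""
    original = image.detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Distance between the current image's features and the decoy's.
        loss = F.mse_loss(extractor(adv), decoy_features)
        loss.backward()
        with torch.no_grad():
            # Step against the gradient to pull features toward the decoy...
            adv = adv - (2 * epsilon / steps) * adv.grad.sign()
            # ...but keep the change imperceptible: clamp to an epsilon
            # ball around the original and to the valid pixel range.
            adv = original + (adv - original).clamp(-epsilon, epsilon)
            adv = adv.clamp(0.0, 1.0)
    return adv

# Hypothetical usage with a stand-in differentiable "feature extractor".
extractor = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
artwork = torch.rand(1, 3, 32, 32)   # the artist's image, as a tensor
decoy = torch.rand(1, 3, 32, 32)     # an image in a very different style
with torch.no_grad():
    decoy_features = extractor(decoy)
cloaked = cloak(artwork, decoy_features, extractor)
print((cloaked - artwork).abs().max())  # bounded by epsilon: visually near-identical

The real tools' engineering lies in the choice of feature extractor, the perturbation budget, and robustness against countermeasures; this sketch only shows the basic mechanism of "looks unchanged to humans, looks different to the model".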
[-] The following 3 users Like Brian's post:
  • nbtruthman, Laird, Sciborg_S_Patel
