AI megathread

134 Replies, 2535 Views

An astronomer has now suggested a new solution to the Fermi Paradox, the observational question: if life is common in the Universe, then where is everybody? As far as extraterrestrial lifeforms are concerned, all we see is emptiness.

The innovative and timely solution, detailed in a new IFLScience article, suggests that the "great filter" may still lie in our near future: the demise of our civilization caused by the development of Artificial General Intelligence (AGI) or, beyond that, Artificial Superintelligence (ASI).

https://www.iflscience.com/new-solution-...n-us-73774

Quote:"First, a little background on the so-called Great Filter. With 200 billion trillion (ish) stars in the universe and 13.7 billion years that have elapsed since it all began, you might be wondering where all the alien civilizations are at. This is the basic question behind the Fermi paradox, the tension between our suspicions of the potential for life in the universe (given planets found in habitable zones, etc) and the fact that we have only found one planet with an intelligent (ish) species inhabiting it.

One solution, or at least a way of thinking about the problem, is known as the Great Filter. Proposed by Robin Hanson of the Future of Humanity Institute at Oxford University, the argument goes that given the lack of observed technologically advanced alien civilizations, there must be a great barrier to the development of life or civilization that prevents them from getting to a stage where they are making big, detectable impacts on their environment that we can witness from Earth.
.....................................................
In a new paper Michael Garrett, Sir Bernard Lovell chair of Astrophysics at the University of Manchester and the Director of the Jodrell Bank Centre for Astrophysics, outlines how the emergence of artificial intelligence (AI) could lead to the destruction of alien civilizations.
.....................................................
"Even before AI becomes superintelligent and potentially autonomous, it is likely to be weaponized by competing groups within biological civilizations seeking to outdo one another," Garrett writes in the paper. "The rapidity of AI's decision-making processes could escalate conflicts in ways that far surpass the original intentions. At this stage of AI development, it's possible that the wide-spread integration of AI in autonomous weapon systems and real-time defence decision making processes could lead to a calamitous incident such as global thermonuclear war, precipitating the demise of both artificial and biological technical civilizations."

When AI leads to Artificial Superintelligence (ASI), the situation could get much worse.

"Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics," Garrett continues. "The practicality of sustaining biological entities, with their extensive resource needs such as energy and space, may not appeal to an ASI focused on computational efficiency—potentially viewing them as a nuisance rather than beneficial. An ASI could swiftly eliminate its parent biological civilisation in various ways, for instance, engineering and releasing a highly infectious and fatal virus into the environment.""

Thus, if the Great Filter hypothesis is in fact correct, it could explain why the Fermi Paradox continues to be observed.

However, in opposition to this "AGI/ASI run amok" theory, it seems fairly certain to me that the fundamental limitations of AGI, rooted in the fundamental limitations of computational processes themselves, will prevent the development of anything like a truly conscious ASI capable of, and inclined toward, eliminating mankind.

I think there are two much more likely explanations for the observed Fermi Paradox. One is that the complete absence so far of any observed extraterrestrial civilizations (setting aside the very controversial outliers of UFOs/UAPs) could simply reflect a near-zero probability that life will spontaneously evolve, transition to multicellular animals, or develop intelligence. I think this possibility is much more likely than the Great Filter hypothesis; this lack of evidence for extraterrestrial intelligence is essentially the conclusion pointed to by various sciences, such as evolutionary biology and the search for viable mechanisms for the spontaneous origin of DNA and of the first living organisms. That research into the spontaneous origin of the first living cells has so far been basically an abysmal failure, because such an event appears extremely statistically improbable. This implies that life is very probably extremely rare in the Universe (barring, of course, supernatural sources), perhaps even to the point where Earth hosts the only life in our Milky Way Galaxy.

To amplify that, it can also be observed that the fossil record and evolutionary biology indicate that the evolution of complex intelligent life on Earth has apparently depended on numerous successive, extremely low-probability evolutionary events that would theoretically occur only very rarely in the history of other planetary systems.
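The arithmetic behind that argument can be made concrete with a toy Drake-style calculation. Every number below is a hypothetical assumption chosen purely for illustration (none comes from the thread or from measured data); the point is only that multiplying several small per-planet transition probabilities together can leave the expected number of civilizations near zero, even across hundreds of billions of planets.

```python
# Toy illustration: chaining several assumed low-probability
# evolutionary transitions. All numbers are made up for the sketch.

habitable_planets = 3e11  # assumed habitable planets in the galaxy

# Assumed per-planet probability of each successive transition:
transitions = {
    "abiogenesis (first living cell)": 1e-6,
    "complex (eukaryote-like) cells":  1e-3,
    "multicellular animals":           1e-2,
    "human-level intelligence":        1e-3,
}

# Probability that a single planet clears every step in succession.
p_all = 1.0
for step, p in transitions.items():
    p_all *= p

expected_civilizations = habitable_planets * p_all
print(f"Combined per-planet probability: {p_all:.1e}")
print(f"Expected civilizations in the galaxy: {expected_civilizations:.1e}")
```

With these particular assumed values, the combined per-planet probability is 1e-14, so the expected count across 3e11 planets comes out well below one, i.e. a galaxy in which Earth is plausibly alone. Changing any single step's probability by a couple of orders of magnitude changes the conclusion, which is exactly why this question remains open.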
(This post was last modified: 2024-04-15, 04:37 PM by nbtruthman. Edited 4 times in total.)
AI ‘LooksRater' app blamed for rise in bullying at school

https://www.swindonadvertiser.co.uk/news/24254740.ai-looksrater-app-blamed-rise-bullying-school/


Quote:The app enables children to upload photos of themselves or their peers to be rated out of 10. If a low score is generated for their photo, users have their flaws pointed out to them.
Quote:One parent of a child who attends the school in Blunsdon said: “It's horrible. If you thought Instagram was doing damage to our kid's self-image and self-esteem you should see the effect this has. My daughter was crying because some of the boys in her class put her on the app and shared her score. It's just awful.”
(This post was last modified: 2024-04-17, 08:47 PM by Brian. Edited 1 time in total.)
[-] The following 3 users Like Brian's post:
  • Laird, stephenw, Sciborg_S_Patel
Do you think they will be stupid enough to put AI in charge of nuclear weapons eventually?

[-] The following 1 user Likes Brian's post:
  • Sciborg_S_Patel
(2024-04-17, 08:45 PM)Brian Wrote: AI ‘LooksRater' app blamed for rise in bullying at school

https://www.swindonadvertiser.co.uk/news...ng-school/

Unsurprising, and my guess is the company would love this as it means more usage.

The same type of people doing this are trying to force us to accept driverless cars killing people, and AI "art" leaving real, hardworking, talented people jobless.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Brian
Meta claims its newest AI model beats some peers. But its amped-up AI agents are confusing Facebook users

By AP on the ABC on 19 April, 2024

Quote:An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan mums, claiming that it, too, had a child in the New York City school district.

Confronted by group members, it later apologised before the comments disappeared, according to a series of screenshots shown to The Associated Press.

"Apologies for the mistake! I'm just a large language model, I don't have experiences or children," the chatbot told the group.
[-] The following 5 users Like Laird's post:
  • Silence, stephenw, Brian, Typoz, Sciborg_S_Patel