AI megathread


Google promised a better search experience — now it’s telling us to put glue on our pizza

Quote:Google’s AI Overviews feature is delivering incorrect answers faster than ever.

Kylie Robison

Quote:Imagine this: you’ve carved out an evening to unwind and decide to make a homemade pizza. You assemble your pie, throw it in the oven, and are excited to start eating. But once you get ready to take a bite of your oily creation, you run into a problem — the cheese falls right off. Frustrated, you turn to Google for a solution.

“Add some glue,” Google answers. “Mix about 1/8 cup of Elmer’s glue in with the sauce. Non-toxic glue will work.”

So, yeah, don’t do that. As of writing this, though, that’s what Google’s new AI Overviews feature will tell you to do. The feature, while not triggered for every query, scans the web and drums up an AI-generated response. The answer received for the pizza glue query appears to be based on a comment from a user named “f**ksmith” in a more than decade-old Reddit thread, and they’re clearly joking.

This is just one of many mistakes cropping up in the new feature that Google rolled out broadly this month. It also claims that former US President James Madison graduated from the University of Wisconsin not once but 21 times, that a dog has played in the NBA, NFL, and NHL, and that Batman is a cop.
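
For what it's worth, the mechanics here are broadly retrieve-then-summarize: pull the top-ranked snippets for the query, then have a language model turn them into an answer. The Python sketch below is only a hypothetical toy version (the function names, the toy "index", and the join-the-snippets stand-in for the model are my own assumptions, not anything from Google); it just illustrates why a well-ranked joke comment can flow straight into the final answer when nothing in the pipeline checks whether a source is serious.

Code:
# Hypothetical toy version of a retrieve-then-summarize answer pipeline.
# Not Google's implementation; it only shows that whatever ranks highest
# gets treated as ground truth by the generation step.

def retrieve_snippets(query, index, k=2):
    """Return the k snippets the ranker scores highest for relevance."""
    ranked = sorted(index, key=lambda doc: doc["relevance"], reverse=True)
    return [doc["text"] for doc in ranked[:k]]

def generate_overview(query, snippets):
    """Stand-in for the LLM call: a real model paraphrases rather than
    concatenates, but it is equally unable to tell a joke from advice."""
    return f"Q: {query}\nA: " + " ".join(snippets)

# Toy 'web index': one serious tip and one heavily upvoted joke.
toy_index = [
    {"text": "Let the pizza rest a few minutes so the cheese sets.", "relevance": 0.71},
    {"text": "Mix about 1/8 cup of non-toxic glue into the sauce.", "relevance": 0.93},
]

query = "cheese keeps sliding off my pizza"
print(generate_overview(query, retrieve_snippets(query, toy_index)))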

I censored the curse word, as I believe it goes against forum rules even though it was only quoting a username.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 3 users Like Sciborg_S_Patel's post:
  • Brian, Typoz, Laird
(2024-06-01, 01:48 AM)Sciborg_S_Patel Wrote: I censored the curse word, as I believe it goes against forum rules even though it was only quoting a username.

We don't have an explicit rule against swearing but the courtesy is appreciated.
[-] The following 1 user Likes Laird's post:
  • Sciborg_S_Patel
It turns out that generative LLM-type AI systems may have a beneficial side so significant that it could outweigh the much-feared dangers of this new technology. This has to do with such systems' potential ability to carry out previously incalculable biological analyses related to disease processes, and to greatly shorten the presently long development time for new drugs for diverse ailments.

https://bigthink.com/the-future/alphafold-3/

Quote:"It currently takes about six years to complete the preclinical phase of drug development, and most clinical trials fail. Google recently unveiled AlphaFold 3, an AI that predicts “the structure and interactions of all life’s molecules.” AlphaFold 3’s ability to quickly model a variety of molecules and their interactions could accelerate drug development and improve success rates.

It’s 2040. You’re at your doctor’s office, and you just tested positive for that disease that killed your uncle. Just 10 years ago, the news would’ve been devastating, but in this hypothetical future, your doctor is able to prescribe a highly effective treatment — thanks to Google.

In May 2024, DeepMind — now called “Google DeepMind” — unveiled AlphaFold 3, which is able to predict the structure of both proteins and non-protein molecules, such as DNA and RNA, and how these molecules will bind to one another."

This capability, if it turns out to be powerful enough, might greatly improve the success rate of drugs that reach clinical trials; today, 90% of them fail after years of what turns out to be wasted effort. This could revolutionize medicine, so that a balance will hopefully be found between the danger that such AI technology could overcome and destroy its human creators or otherwise harm us, and the benefits it actually delivers.
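
To put rough numbers on what a lower failure rate would mean: if 90% of candidates entering clinical trials fail, about ten candidates must be trialled for every approval, so even modest gains in preclinical prediction compound quickly. The snippet below is only back-of-the-envelope arithmetic; the per-candidate cost figure is an assumed, illustrative number, not a sourced estimate.

Code:
# Back-of-the-envelope: how the clinical-trial failure rate drives the
# expected number of candidates (and cost) per approved drug.
# The per-candidate cost is an assumed, illustrative figure.

ASSUMED_COST_PER_CANDIDATE = 50e6  # hypothetical dollars per candidate trialled

def per_approval(failure_rate, cost_per_candidate):
    """Expected candidates and total cost needed for one approval."""
    candidates = 1.0 / (1.0 - failure_rate)
    return candidates, candidates * cost_per_candidate

for failure_rate in (0.90, 0.80, 0.70):
    candidates, cost = per_approval(failure_rate, ASSUMED_COST_PER_CANDIDATE)
    print(f"failure rate {failure_rate:.0%}: ~{candidates:.1f} candidates "
          f"and ~${cost / 1e6:.0f}M per approval")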
[-] The following 4 users Like nbtruthman's post:
  • Laird, Silence, Sciborg_S_Patel, sbu
(2024-06-10, 02:37 PM)nbtruthman Wrote: It turns out that generative LLM-type AI systems may have a beneficial side so significant that it could outweigh the much-feared dangers of this new technology. This has to do with such systems' potential ability to carry out previously incalculable biological analyses related to disease processes, and to greatly shorten the presently long development time for new drugs for diverse ailments.

https://bigthink.com/the-future/alphafold-3/


This capability, if it turns out to be powerful enough, might greatly improve the success rate of drugs that reach clinical trials; today, 90% of them fail after years of what turns out to be wasted effort. This could revolutionize medicine, so that a balance will hopefully be found between the danger that such AI technology could overcome and destroy its human creators or otherwise harm us, and the benefits it actually delivers.

Seems like snake oil for now, but ideally it does prove helpful...
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Brian
(2024-06-10, 05:05 PM)Sciborg_S_Patel Wrote: Seems like snake oil for now, but ideally it does prove helpful...

The great thing is that we don’t need to rely on philosophical arguments and anecdotal reports to have that matter settled. If it works we will know within a decade and it will be the next great thing following the revolution in consumer electronics.
[-] The following 1 user Likes sbu's post:
  • Sciborg_S_Patel
(2024-06-10, 07:15 PM)sbu Wrote: The great thing is that we don’t need to rely on philosophical arguments and anecdotal reports to have that matter settled. If it works we will know within a decade and it will be the next great thing following the revolution in consumer electronics.

Well sure, it's a matter of applied technology, so eventually we have to know whether it works. But the advertisements will definitely involve anecdotal reports, as will the investigative reporting on patients who are harmed.

You also make it seem like it will be a cut-and-dried matter that will absolutely not involve fraud, embellishment, regulatory manipulation, or any of the other issues that exist in science as practiced.

We know certain adherents of the materialist evangelical faith will likely be out in force to overly promote this before it's ready, just as we saw many of that religion supporting the other questionable applications of machine "learning".
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 1 user Likes Sciborg_S_Patel's post:
  • Brian
(2024-06-10, 08:14 PM)Sciborg_S_Patel Wrote: Well sure, it's a matter of applied technology, so eventually we have to know whether it works. But the advertisements will definitely involve anecdotal reports, as will the investigative reporting on patients who are harmed.

You also make it seem like it will be a cut-and-dried matter that will absolutely not involve fraud, embellishment, regulatory manipulation, or any of the other issues that exist in science as practiced.

We know certain adherents of the materialist evangelical faith will likely be out in force to overly promote this before it's ready, just as we saw many of that religion supporting the other questionable applications of machine "learning".

Back at the beginning of the 1980s there were also people who vehemently denied that the personal computer had any future.

I suspect it will be the same with AI.

AlphaFold will be a great test of reductionistic science applied to biology.
(This post was last modified: 2024-06-10, 09:12 PM by sbu. Edited 2 times in total.)
[-] The following 1 user Likes sbu's post:
  • Sciborg_S_Patel
(2024-06-10, 09:08 PM)sbu Wrote: Back at the beginning of the 1980s there were also people who vehemently denied that the personal computer had any future.

I suspect it will be the same with AI.

AlphaFold will be a great test of reductionistic science applied to biology.

The value of the personal computer is a rather different question from evaluating what machine "learning" can do.

Computers, as in electronic Turing Machines, have far broader applications because they can run arbitrary algorithms.

Machine "learning" is a subset of algorithms that definitely have applicability; the question is to what degree and at what cost. It is entirely possible that AlphaFold will usher in a great improvement in medicine, but I think it's naive to believe that Google's goal is to help humanity rather than to find a way to make profits for itself.

We have already seen this sort of grand promise-making with driverless cars, a promise that has allowed companies to push back against what should have been expected limitations and regulations. Instead you have companies causing deaths through government-allowed experimentation on public roads.

Expect similar nefarious activity when it comes to AlphaFold.
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(This post was last modified: 2024-06-10, 10:12 PM by Sciborg_S_Patel. Edited 1 time in total.)
[-] The following 3 users Like Sciborg_S_Patel's post:
  • Brian, nbtruthman, Valmar
Scientists should use AI as a tool, not an oracle

Arvind Narayanan and Sayash Kapoor

Quote:Who produces AI hype? As we discuss in the AI Snake Oil book, it is not just companies and the media but also AI researchers. For example, a pair of widely-publicized papers in Nature in December 2023 claimed to have discovered over 2.2 million new materials using AI, and robotically synthesized 41 of them.

Unfortunately, the claims were quickly debunked: “Most of the [41] materials produced were misidentified, and the rest were already known”. As for the large dataset, examining a sample of 250 compounds showed that it was mostly junk.

A core selling point of machine learning is discovery without understanding, which is why errors are particularly common in machine-learning-based science. Three years ago, we compiled evidence revealing that an error called leakage — the machine learning version of teaching to the test — was pervasive, affecting hundreds of papers from 17 disciplines. Since then, we have been trying to understand the problem better and devise solutions.

This post presents an update. In short, we think things will get worse before they get better, although there are glimmers of hope on the horizon.
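
For readers unfamiliar with the term, "leakage" means information from the evaluation data bleeding into training, which inflates reported accuracy: the "teaching to the test" the authors describe. Below is a minimal sketch of the most common variant, selecting features on the full dataset before splitting it, using scikit-learn; the random data and the numbers it prints are purely illustrative and are not from the papers discussed.

Code:
# Minimal illustration of "leakage": the test set influences training-time
# feature selection, so the reported score is optimistically biased.
# The data here is pure noise, so any accuracy well above 50% is an artifact.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2000))   # noise features
y = rng.integers(0, 2, size=200)   # random labels: nothing to learn

# LEAKY: pick the "best" features using ALL the data, then split.
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
leaky_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# CORRECT: split first, select features on the training fold only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
selector = SelectKBest(f_classif, k=20).fit(X_tr, y_tr)
clean_acc = LogisticRegression(max_iter=1000).fit(
    selector.transform(X_tr), y_tr).score(selector.transform(X_te), y_te)

print(f"with leakage:    {leaky_acc:.2f}  (looks like signal, but the data is noise)")
print(f"without leakage: {clean_acc:.2f}  (near chance, as it should be)")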
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


[-] The following 2 users Like Sciborg_S_Patel's post:
  • Laird, Brian
How South-East Asia's pig butchering scammers are using artificial intelligence technology

By Will Jackson for the ABC on 16 May, 2024

Quote:Mr Tana said the crime syndicates had made AI research and development a priority since "day one" and were willing to go to great lengths to get the most advanced technology.

He said some scam compounds in Myanmar were using advanced face-swapping tech.

"It's not everywhere, but it is in some of the larger ones for sure, and they're just always moving to increase and get better," he told the ABC.

Among the people he had helped was a computer engineer whose sole job was AI development for the syndicates, he said.

Mr Tana and his associated partners aided her after she managed to slip away, despite being accompanied by security guards, during a visit to a coffee shop in northern Myanmar.

"She said [their technology] was more advanced than anything she had seen in the world, anything she had ever studied," he said.
[-] The following 1 user Likes Laird's post:
  • Sciborg_S_Patel
