AI is wrestling with a replication crisis


AI is wrestling with a replication crisis

Will Douglas Heaven

Quote: Last month Nature published a damning response written by 31 scientists to a study from Google Health that had appeared in the journal earlier this year. Google was describing successful trials of an AI that looked for signs of breast cancer in medical images. But according to its critics, the Google team provided so little information about its code and how it was tested that the study amounted to nothing more than a promotion of proprietary tech.

“We couldn’t take it anymore,” says Benjamin Haibe-Kains, the lead author of the response, who studies computational genomics at the University of Toronto. “It's not about this study in particular—it’s a trend we've been witnessing for multiple years now that has started to really bother us.”

Haibe-Kains and his colleagues are among a growing number of scientists pushing back against a perceived lack of transparency in AI research. “When we saw that paper from Google, we realized that it was yet another example of a very high-profile journal publishing a very exciting study that has nothing to do with science,” he says. “It's more an advertisement for cool technology. We can’t really do anything with it.”
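
To make the transparency complaint concrete: below is a minimal sketch, in Python, of the sort of reproducibility scaffolding the critics want released alongside the results, i.e. fixed seeds, recorded library versions, and a rerunnable evaluation script. The data and model here are made-up stand-ins and have nothing to do with Google's actual pipeline.

Code:
# Hypothetical sketch only: pinned seeds, recorded versions, and a
# rerunnable evaluation script. None of this reflects the actual
# Google Health study; data and model are stand-ins.
import json
import random
import sys

import numpy as np
import sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

SEED = 42  # fixed so anyone re-running gets the same split and model
random.seed(SEED)
np.random.seed(SEED)

# Stand-in data; a real release would point at the actual images/labels.
X = np.random.rand(500, 20)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=SEED, stratify=y
)

model = RandomForestClassifier(n_estimators=100, random_state=SEED)
model.fit(X_train, y_train)

# Record exactly what was run, so a reader can check the numbers.
report = {
    "python": sys.version,
    "sklearn": sklearn.__version__,
    "seed": SEED,
    "test_accuracy": float(model.score(X_test, y_test)),
}
print(json.dumps(report, indent=2))

If something like that ships with the paper, anyone can rerun the numbers; if it doesn't, readers are left taking the headline result on faith.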
'Historically, we may regard materialism as a system of dogma set up to combat orthodox dogma...Accordingly we find that, as ancient orthodoxies disintegrate, materialism more and more gives way to scepticism.'

- Bertrand Russell


(2021-03-04, 05:51 PM)Sciborg_S_Patel Wrote: AI is wrestling with a replication crisis

Will Douglas Heaven

I expect no transparency on this type of tech. I'm not even sure it's up to Google (and the others): I'm the farthest thing from a conspiracist, but I can see governments wrapping themselves up in this stuff under the banner of sovereign security. (And perhaps rightly so.)
(2021-03-04, 06:34 PM)Silence Wrote: I expect no transparency on this type of tech. I'm not even sure it's up to Google (and the others): I'm the farthest thing from a conspiracist, but I can see governments wrapping themselves up in this stuff under the banner of sovereign security. (And perhaps rightly so.)

I also suspect that, more and more, the results will just be whatever the machine learning programs spit out, divorced from any easily traceable methodology. So there's not much for a human to write about in a paper.

Maybe not so bad for image recognition of cancer, but the policy AIs will be more disturbing.

'At the highest levels of authority, we will probably retain human figureheads, who will give us the illusion that the algorithms are only advisers and that ultimate authority is still in human hands.'
  - Yuval Harari

"Your search for God is over. God is in the Machine."
  - Grant Morrison, Invisibles: Kissing Mr. Quimper

(2021-03-04, 07:14 PM)Sciborg_S_Patel Wrote: I also suspect that, more and more, the results will just be whatever the machine learning programs spit out, divorced from any easily traceable methodology. So there's not much for a human to write about in a paper.

Maybe not so bad for image recognition of cancer, but the policy AIs will be more disturbing.


I may be naive, but this supposed "magic" of machine learning is something I ain't buying. I get the general concept of letting code do complex things that aren't "simply reproducible," so to speak, but if we can code something to do that, we can also code it to explain and document its "work": its "rationale" for whatever advice or guidance it produces.

I guess it's conceivable to imagine a machine "intelligence" that's beyond our own (i.e., one that can't explain itself to us), but that seems fantastical as well.
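
For what it's worth, a crude version of that "show your work" idea is already easy to code. Below is a minimal sketch, assuming a scikit-learn tree ensemble and invented data (the feature names are hypothetical), in which the model is asked for a digestible account of which inputs drove its prediction, not just the prediction itself.

Code:
# Minimal sketch of a model reporting a crude "rationale" alongside its
# prediction. Feature importances are a blunt instrument, but they are
# the sort of digestible summary described above; data and feature
# names are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["lesion_size", "density", "asymmetry", "patient_age"]

# Invented data in which only the first two features actually matter.
X = rng.random((400, 4))
y = ((X[:, 0] > 0.6) & (X[:, 1] > 0.5)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

case = X[:1]
print("prediction:", int(model.predict(case)[0]))
print("which inputs the model leaned on:")
for name, weight in sorted(
    zip(feature_names, model.feature_importances_), key=lambda p: -p[1]
):
    print(f"  {name}: {weight:.2f}")

It's a blunt rationale rather than a real explanation, but it's the kind of output a human could at least write up.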
(2021-03-04, 09:05 PM)Silence Wrote: I may be naive, but this supposed "magic" of machine learning is something I ain't buying. I get the general concept of letting code do complex things that aren't "simply reproducible," so to speak, but if we can code something to do that, we can also code it to explain and document its "work": its "rationale" for whatever advice or guidance it produces.

I guess it's conceivable to imagine a machine "intelligence" that's beyond our own (i.e., one that can't explain itself to us), but that seems fantastical as well.

Oh, it's not that the machine is intelligent; it's that the amount of data that goes into the "curve-fitting" doesn't necessarily give us a coherent algorithm.

No one is in the machine, learning anything, so we won't even be slaves to synthetic minds... rather, just to machines churning along with no one really inside them...
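
A rough illustration of that, using a small scikit-learn neural network fit to invented data: the finished "algorithm" is nothing but matrices of weights, with no readable steps inside.

Code:
# Toy illustration of "curve-fitting with no one inside": after training,
# the learned "algorithm" is just arrays of weights, not readable steps.
# Data is invented; the network is a small scikit-learn MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(1000, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=1000)

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000,
                   random_state=1).fit(X, y)

print("fit quality (R^2):", round(net.score(X, y), 3))
# The entire "method" the network has learned:
for i, w in enumerate(net.coefs_):
    print(f"layer {i} weights, shape {w.shape}:")
    print(w.round(2)[:3])  # a slice; the rest is just more numbers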


(2021-03-04, 09:24 PM)Sciborg_S_Patel Wrote: Oh, it's not that the machine is intelligent; it's that the amount of data that goes into the "curve-fitting" doesn't necessarily give us a coherent algorithm.

No one is in the machine, learning anything, so we won't even be slaves to synthetic minds... rather, just to machines churning along with no one really inside them...

Yup, I get that. My point was that, should the tech evolve to the point where humans are "asking" the machine for its analysis of something, we should be able to have "coded" the machine in such a way that it can communicate, at some digestible level, how it came to its conclusion(s). A straight black-box soothsaying machine ain't gonna be trusted by anybody. ;)
(2021-03-05, 04:28 PM)Silence Wrote: A straight black-box soothsaying machine ain't gonna be trusted by anybody. ;)

You'd hope so...


