Meta-analysis of scientific evidence about information provided by mediums


A preprint entitled "Anomalous information reception by mediums: A meta-analysis of the scientific evidence" has been uploaded to ResearchGate by Patrizio Tressoldi. It includes studies published from 2001 onwards. Information about the authors has been removed from the manuscript itself so that future reviewers can be blind to its authorship. The authors would like as much feedback as possible:
https://www.researchgate.net/publication..._evidence/

Here is the abstract:
"Background and purpose: Mediumship is the ostensible phenomenon of human-mediated communication between deceased and living persons. In this paper, we perform a meta-analysis of all available modern experimental evidence up to December 2019 investigating the accuracy of apparently anomalous information provided by mediums about deceased individuals.
Methods: Fourteen papers passed our selection criteria, for a total of 18 experiments. Both Bayesian and frequentist random effects models were used to estimate the aggregate effect size across studies.
Results: The overall standardized effect size (proportion index), estimated both with a frequentist and a Bayesian random effects model, yielded a value of .18 (95% C.I. = .12 - .25), above the chance level. Furthermore, these estimates passed the control of two publication bias tests.
Conclusions: The results of this meta-analysis support the hypothesis that some mediums can retrieve information about deceased persons through unknown means."
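
To make the statistical machinery a bit more concrete: a frequentist random-effects meta-analysis like the one described in the abstract pools the per-study effect estimates, weighting each by the inverse of its sampling variance plus an estimated between-study variance. Below is a minimal sketch in Python of the common DerSimonian-Laird version of that estimator. The per-study numbers in the example are invented purely for illustration (the preprint's actual data, its Bayesian model, and its publication-bias tests are not reproduced here).

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects
    model; returns the pooled estimate, a 95% CI, and the estimated
    between-study variance tau^2."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                            # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)     # fixed-effect pooled mean
    q = np.sum(w * (effects - theta_fe) ** 2)      # Cochran's Q statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = 1.0 / (variances + tau2)                # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, (theta_re - 1.96 * se, theta_re + 1.96 * se), tau2

# Hypothetical per-study proportion-index effects and sampling variances,
# NOT the 18 experiments from the preprint:
effects = [0.22, 0.15, 0.30, 0.10, 0.18, 0.25]
variances = [0.004, 0.006, 0.009, 0.005, 0.003, 0.008]
theta, ci, tau2 = dersimonian_laird(effects, variances)
print(f"pooled effect = {theta:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), tau^2 = {tau2:.4f}")
```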
I found one of the references quite interesting:
Evan Carter, Felix Schönbrodt, Will Gervais, and Joseph Hilgard
Correcting for bias in psychology: A comparison of meta-analytic methods

This was published in Advances in Methods and Practices in Psychological Science, 2 (2019).
doi:10.1177/2515245919847196

But there is a preprint here, which the authors say is identical to the final published version in all results and conclusions:
https://psyarxiv.com/9h3nu

What they did was to simulate a large number of experimental results, with parameters appropriate to psychology, under models that incorporated effect-size heterogeneity, publication bias, and questionable research practices (QRPs). They then applied a number of meta-analysis methods and investigated their false-positive rates, their power, and the accuracy of their effect-size estimates and confidence intervals in a variety of situations. A stripped-down toy version of this kind of simulation is sketched below.
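
To be clear, this is not Carter et al.'s actual code: it uses only a crude publication-bias model (significant positive results are always published, others with some fixed probability) and a plain DerSimonian-Laird estimator rather than the suite of bias-correcting methods they compared, and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling (same sketch as in the
    post above); returns the pooled estimate and its standard error."""
    w = 1.0 / variances
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_re = 1.0 / (variances + tau2)
    theta = np.sum(w_re * effects) / np.sum(w_re)
    return theta, np.sqrt(1.0 / np.sum(w_re))

def simulate_published_studies(k, mu, tau, p_pub_nonsig):
    """Draw k *published* studies: study-level true effects ~ N(mu, tau^2);
    significant positive results are always published, everything else only
    with probability p_pub_nonsig (a crude publication-bias model)."""
    effects, variances = [], []
    while len(effects) < k:
        n = rng.integers(20, 200)                  # per-group sample size
        var_i = 2.0 / n                            # approx. variance of Cohen's d
        d = rng.normal(rng.normal(mu, tau), np.sqrt(var_i))
        significant_positive = d / np.sqrt(var_i) > 1.96
        if significant_positive or rng.random() < p_pub_nonsig:
            effects.append(d)
            variances.append(var_i)
    return np.asarray(effects), np.asarray(variances)

# False-positive rate: how often the pooled 95% CI excludes zero when the
# true mean effect is zero but publication bias is strong.
n_sims, false_positives = 1000, 0
for _ in range(n_sims):
    eff, var = simulate_published_studies(k=18, mu=0.0, tau=0.1, p_pub_nonsig=0.1)
    theta, se = pooled_random_effects(eff, var)
    false_positives += abs(theta) / se > 1.96
print(f"false-positive rate: {false_positives / n_sims:.3f}")
```

Running this shows how badly selective publication can inflate a naive random-effects estimate under the null; keeping that false-positive rate down is exactly what the bias-correcting methods Carter et al. compared are designed to do.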

I don't know a lot about meta-analysis, but my impression is that some of these methods are more sophisticated than the ones that are commonly used in parapsychology meta-analyses. I'm surprised by the small false-positive rates some of them achieve, coupled with reasonably high power, despite the fact that the models incorporate publication bias and QRPs. Of course the results are limited to particular models of these factors, but even so they seem quite impressive.
