Discussion of precognition in the journal Psychology of Consciousness


(I originally posted this to the "Text resources" thread, thinking that the papers weren't available online. As most of them are, it probably deserves its own thread.)

Courtesy of the SPR Facebook page, here's a blog post by Carlos S. Alvarado, describing a discussion of precognition in a recent issue of the journal Psychology of Consciousness: Theory, Research, and Practice. The articles don't seem to be freely available online, but Alvarado quotes the abstracts:
https://carlossalvarado.wordpress.com/2018/04/19/precognition-discussed-in-a-psychology-journal/

After an editorial, there is an article by Jonathan W. Schooler, Stephen Baumgart, and Michael Franklin making the case for the scientific investigation of anomalous cognition. Then there is a comprehensive review of the evidence by Julia Mossbridge and Dean Radin, then two invited criticisms of the review, by D. Samuel Schwarzkopf and by James Houran, Rense Lange, and Dan Hooper (apparently dealing mainly with plausibility in the light of theoretical physics and statistical analysis), and finally a response by Mossbridge and Radin. 

I quite liked the thrust of the abstract by Schooler et al:
They distinguish the criteria that justify entertaining the possibility of anomalous cognition from those required to endorse it as a bona fide phenomenon. ... we provide arguments for why researchers should consider adopting a liberal criterion for entertaining anomalous cognition while maintaining a very strict criterion for the outright endorsement of its existence. Grounded in an understanding of the justifiability of disparate views on the topic, the authors encourage humility on both the part of those who present evidence in support of anomalous cognition and those who dispute the merit of its investigation.

.....................................................................................................

Actually, four of the six articles are freely available online in one form or another.

Schooler et al:
https://labs.psych.ucsb.edu/schooler/jon...241js3.pdf

Mossbridge and Radin review:
https://www.academia.edu/36395925/Precog...e_Evidence

Schwarzkopf:
https://sampendu.files.wordpress.com/201...eradin.pdf

Mossbridge and Radin response:
https://www.academia.edu/36395926/Plausi...ion_Review
Schwarzkopf's paper is very brief and indicates that he feels the more substantial criticisms he made in 2014 of the meta-analysis and subsequent discussion of presentiment experiments by Mossbridge et al. still stand. So it would be fair to take account of that earlier exchange.

The meta-analysis by Mossbridge, Tressoldi and Utts is here:
https://escholarship.org/uc/item/22b0b1js

The subsequent discussion paper by Mossbridge et al. is here:
https://pdfs.semanticscholar.org/631a/75...1524300259

The paper containing Schwarzkopf's criticism is here:
https://www.frontiersin.org/articles/10....00332/full

A follow-up blog post by Schwarzkopf is here:
https://figshare.com/articles/Why_presen...ed/1021480

A response by Mossbridge et al. is here:
https://arxiv.org/ftp/arxiv/papers/1501/1501.03179.pdf

When looking for the online versions of those papers, I came across this update of the meta-analysis, which was posted online only four weeks ago:

Michael Duggan and Patrizio E. Tressoldi
Predictive Physiological Anticipation Preceding Seemingly Unpredictable Stimuli: An Update of Mossbridge's et al. Meta-Analysis
https://papers.ssrn.com/sol3/papers.cfm?...id=3097702

The abstract reads:

Background: This is an update of Mossbridge et al.'s meta-analysis of physiological anticipation preceding seemingly unpredictable stimuli. The overall effect size observed was 0.21; 95% Confidence Intervals: 0.13-0.29.

Methods: Eighteen new peer and non-peer reviewed studies completed from January 2008 to October 2017 were retrieved, describing a total of 26 experiments and 34 associated effect sizes.

Results: The overall weighted effect size, estimated with a frequentist multilevel random model, was: 0.29; 95% Confidence Intervals: 0.19-0.38; the overall weighted effect size, estimated with a multilevel Bayesian model, was: 0.29; 95% Credible Intervals: 0.18-0.39.

Effect sizes of peer-reviewed studies were slightly higher (0.38; Confidence Intervals: 0.27-0.48) than those of non-peer-reviewed articles (0.22; Confidence Intervals: 0.05-0.39).

The statistical estimation of publication bias using the Copas model suggests that the main findings are not contaminated by publication bias.

Conclusions: In summary, with this update, the main findings reported in Mossbridge et al.'s meta-analysis are confirmed.
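
For anyone unfamiliar with how pooled effect sizes and intervals like these are produced, here is a rough Python sketch of the simplest inverse-variance pooling step, using made-up per-experiment numbers. The update itself uses a multilevel random-effects model and a Bayesian equivalent, so this only illustrates the general idea:

import math

# Hypothetical per-experiment effect sizes and standard errors.
# These numbers are invented purely for illustration, not taken from the paper.
effects = [0.35, 0.10, 0.30, 0.40, 0.25]
ses     = [0.15, 0.12, 0.20, 0.18, 0.10]

# Inverse-variance weights: more precise experiments count for more.
weights = [1.0 / se ** 2 for se in ses]

pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Approximate 95% confidence interval for the pooled effect size.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect size = {pooled:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")

The multilevel models used in the paper additionally account for the fact that several effect sizes can come from the same experiment or laboratory, which a simple pooled average like this ignores.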
Thanks for all this, Chris. It looks worth digging into. Just so you know, I edited your (opening) post because the links to the PDFs had been corrupted by some rogue tags - they seem to be working fine now.
I'll gradually post a few comments on the papers here.

The paper by Schooler et al. puts the case for anomalous cognition to be viewed as a legitimate field of enquiry by those who wish to pursue it, but argues that extraordinarily strong evidence will be required before its existence can be accepted. The authors categorise themselves as "[viewing] anomalous cognition as unlikely, [but] also [appreciating] its profound significance were it true." From that point of view most of it seems fairly reasonable.

Towards the end they give a list of criteria for the scientific acceptance of anomalous cognition:

1. Careful evaluation of design by skeptics and supporters prior to the initiation of the protocol;
2. Preregistration of protocol including data analysis using both standard and Bayesian procedures;
3. A computer implemented procedure using locked code that cannot be tampered with;
4. A procedure that can be carried out by participants without interaction with experimenters as it takes place;
5. Off-site logging of data;
6. Careful independent analysis of data by multiple statisticians blind to condition;
7. Analysis of data must reveal highly significant results when analyzed using both standard and Bayesian procedures;
8. The resulting protocol must itself be replicated by numerous independent laboratories; and
9. Ideally the protocol should be transformed into a paradigm that can have demonstrable real world outcomes, for example, predicting stock markets.

Again, it mostly seems reasonable enough, but in (2) and (7) I don't think the emphasis on Bayesian analysis has been thought through. If the analysis includes the prior probability of the existence of psi, then that is something that's impossible to determine objectively, and something that sceptics and proponents will never agree on. Even if that prior probability is left out of the analysis, there are still going to be severe difficulties with the Bayesian approach if psi is other than a neat, well-behaved phenomenon - for example, if it exhibits experimenter effects, decline effects and so on. In that case Bayesian methods need a model of how psi behaves statistically, which could be very hard to develop. Traditional (frequentist) statistics don't require that - they require only knowledge of the statistics under the null hypothesis that there is no psi. The criteria should allow the existence of psi to be demonstrated even if it's an untidy, badly behaved phenomenon whose statistics we don't understand.
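
To make the point about priors concrete, here is a minimal sketch (with assumed numbers, not taken from any of the papers) of how the posterior probability of psi depends on the prior one starts from, even when the data yield a strong Bayes factor:

def posterior_probability(prior, bayes_factor):
    """Posterior P(psi | data) given a prior P(psi) and a Bayes factor
    BF = P(data | psi) / P(data | no psi)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = bayes_factor * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# The same (assumed) Bayes factor of 100 in favour of psi...
bf = 100.0
# ...filtered through three different priors: an agnostic, a mild sceptic, a hard sceptic.
for prior in (0.5, 1e-3, 1e-9):
    print(f"prior {prior:g} -> posterior {posterior_probability(prior, bf):.3g}")

With a vanishingly small prior, even evidence a hundred times more likely under psi than under chance leaves the posterior tiny, which is exactly where sceptics and proponents part company.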

Equally, I find criterion number (9) baffling. Why should a "real world outcome" be required as part of a scientific research programme? It's particularly baffling because some people have theorised that there may be some factor that limits or blocks "real world outcomes" of psi.
(2018-04-21, 10:43 PM)Chris Wrote: 9. Ideally the protocol should be transformed into a paradigm that can have demonstrable real world outcomes for example, predicting stock markets. 

Equally, I find criterion number (9) baffling. Why should a "real world outcome" be required as part of a scientific research programme? It's particularly baffling because some people have theorised that there may be some factor that limits or blocks "real world outcomes" of psi.

It does only say ‘ideally’. In terms of convincing the wider population, that would be ideal. It doesn’t appear to be a study killer though.
(2018-04-21, 11:21 PM)malf Wrote: It does only say ‘ideally’. In terms of convincing the wider population, that would be ideal. It doesn’t appear to be a study killer though.

These are meant to be criteria for convincing the scientific world, though. 

If criteria 1-8 were satisfied, and bullet-proof evidence of a replicable phenomenon based on proper randomisation had been produced, I just don't understand what benefit there would be in transforming it into - for example - a prediction of the stock market, which of course is not a random system, and whose movements it's quite believable that people can predict to some extent by conventional means.
Thinking about it a bit more, while the paper may be reasonable from the perspective of "scientists [who] view anomalous cognition as unlikely, [but who] also appreciate its profound significance were it true", I wonder how reasonable that viewpoint is for people who are familiar with the evidence. 

I don't so much mean the meta-analyses, because (unless the ideas of Watt and Kennedy on pre-registering not only studies but also meta-analyses are adopted) those are always going to be vulnerable to suggestions of bias of one kind or another. But there are individual studies or programmes - such as Bem's original ones and the Global Consciousness Project - that have produced extremely strong statistical evidence. I think that in order to view anomalous cognition as unlikely, one needs a plausible conventional explanation of that evidence. 

The only plausible conventional explanation I've seen is that the people who performed those studies are lying about what they did. I don't think Schooler et al. believe that.
I thought it might be interesting to summarise Schwarzkopf's arguments, but his 2018 contribution has defeated me. Unfortunately, the more closely I looked at what he said, the less logical it seemed, and the less able I felt to produce a summary of his argument that made sense. 

He says at the start that he is going to deal only with the plausibility of precognition, having made other criticisms of the presentiment work previously. What it really boils down to is that he doesn't believe it's possible, and nothing will convince him that it is.

Apparently he does feel a need to try to back this opinion up with some arguments, but what he produces is the kind of thing copied below, which seems very incoherent and self-contradictory to me (I've added some emphasis to indicate the parts that seem particularly inconsistent):

M&R’s argument is known as the base rate fallacy: No matter how strong the statistical evidence, if the hypothesis is impossible, it must necessarily be false. The p-value is irrelevant when the observed effect size cannot be observed under the alternative hypothesis. I cannot confidently claim that precognition or presentiment are impossible. I simply do not know enough about the universe to know this for certain. I am however extremely skeptical that such retro-causal effects exist. Critically, even if I accept that such effects are at least possible, the rate at which they can be observed in noisy psychology or physiology experiments must be nanoscopic, many orders of magnitude below those reported by these studies. The reported effects are not plausible under this hypothesis and thus alternative explanations are far more likely.

Therefore, I must disagree with M&R that we are dealing here with “scientific heresies of the first order.” Rather this statement betrays a fundamental misunderstanding: there are no heresies in science. Dogma is antithetical to science and any assumption can be challenged. Critically, however, nobody should take you seriously without compelling evidence. Frank may very well be a wizard but unless you show me more conclusive evidence that wizards actually exist I remain doubtful. I am skeptical that precognition is even possible but I certainly will not be convinced of its existence by some implausible observations, no matter how significant the meta-analysis.

In particular, I really don't understand the sentence that begins "Critically ...". I don't know what Schwarzkopf means by "the rate at which they can be observed in noisy psychology or physiology experiments must be nanoscopic". I guess he really means that he thinks that if precognition existed it would be too weak to observe in a noisy experiment (see below).

Anyhow, apart from this kind of general assertion, there are some more specific points about plausibility:

(1) Different physiological processes have different timescales and for each, the timescale of the alleged presentiment effect is similar to the normal timescale of the process. Schwarzkopf finds this implausible. Although he doesn't explain his objection very clearly, I think I can see what he's driving at.

(2) If precognition operated with the success rate found in Daryl Bem's experiments, people would be able to make large amounts of money by betting (see the rough arithmetic sketch after this list).

(3) Even if we accept ideas about quantum entanglement "or other subatomic time-reversals" as an explanation for precognition, the effects should be tiny - again, much smaller than those seen by Bem.
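
As a back-of-the-envelope illustration of point (2), here is a small Python sketch of what a hit rate a few points above chance would be worth on repeated even-odds bets. The 53% figure is an assumption for illustration, not a number taken from any of the papers:

# Expected value of repeated even-odds bets with a hit rate a little above chance.
# The 53% hit rate is assumed for illustration only.
hit_rate = 0.53

# Expected profit per unit staked on a single even-odds bet:
# win 1 unit with probability hit_rate, lose 1 unit otherwise.
ev_per_bet = 2 * hit_rate - 1
print(f"expected profit per bet: {ev_per_bet:+.2f} units per unit staked")

# Expected bankroll after many bets, staking a fixed 5% of the bankroll each time.
bankroll, fraction, n_bets = 100.0, 0.05, 1000
for _ in range(n_bets):
    bankroll *= 1 + fraction * ev_per_bet
print(f"expected bankroll after {n_bets} bets: {bankroll:.0f} units")

A hit rate only three points above chance yields an expected profit of about 6% of the stake per bet, which compounds quickly; presumably that is the kind of arithmetic behind Schwarzkopf's claim.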
I don't know how feasible it is to apply this methodology to something as broad-based and complicated as stock market prediction. The task would be incredibly difficult, I would think, and the results hard to decipher. But maybe somebody is up to the challenge. It's a much simpler mental exercise to anticipate a simple picture, or something of the sort. Or is it? I don't know.
