Discussion of precognition in the journal Psychology of Consciousness

To continue (on the basis that this is a useful exercise, at least for me).

Going on to the more specific criticisms of the papers by Mossbridge et al., made by Schwarzkopf in 2014: these are to be found briefly in a published paper and at greater length in a blog post:
https://www.frontiersin.org/articles/10....00332/full
https://figshare.com/articles/Why_presen...ed/1021480

Actually, these also include some other general arguments: for example, that precognition breaks the second law of thermodynamics, or the principles of theoretical physics; that statistics can demonstrate only a difference from the null hypothesis, not what the underlying process is; or that if presentiment existed the entire body of evidence in neuroscience would have to be re-analysed. Obviously the first of these has been debated quite a lot, including by physicists (Schwarzkopf is a psychologist), and I don't think the others in themselves cast any doubt on the reality of the phenomenon.

On the presentiment meta-analysis, Schwarzkopf makes two points I broadly agree with:

(1) A meta-analysis is only as good as the primary studies it's based on. He criticises the quality of one in particular, by Bierman and Scholte, though apparently he slips up in suggesting that non-peer-reviewed conference proceedings were also included. (In their response, Mossbridge et al. say all the conference proceedings were peer-reviewed.) But obviously it's right that all meta-analyses are vulnerable to problems with the original studies (just as they are vulnerable to selective reporting).

(2) A potential problem with the presentiment studies is that the physiological variables measured just before each stimulus depend on all the previous stimuli, and this can lead to statistical artefacts (usually referred to as expectation bias in the literature). These effects have been estimated by modelling, but Schwarzkopf doesn't think this is satisfactory. He favours trying to measure the effects directly. (I agree that modelling is unsatisfactory, but I think attempts at direct measurement are too. I think what's needed is either to modify the experimental protocol, or to analyse the data in a way that eliminates the bias.)
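
To make this concrete, here's a toy simulation of my own devising (not from any of the papers), assuming a simple "gambler's fallacy" model in which arousal grows with the run of calm trials since the last arousing one. Running something like this for a given protocol gives an empirical estimate of how big the per-session averaging artefact can be, rather than relying on abstract argument:

```python
import numpy as np

rng = np.random.default_rng(0)

def session(n_trials=40, p_arousing=0.33):
    """One session under a gambler's-fallacy model: pre-stimulus arousal
    grows with the run of calm trials since the last arousing stimulus.
    Stimuli are independently randomised, so any systematic calm/arousing
    difference in the per-session averages is an artefact."""
    run, pre, kind = 0, [], []
    for _ in range(n_trials):
        pre.append(run + rng.normal(0.0, 1.0))  # expectation + noise
        arousing = rng.random() < p_arousing
        kind.append(arousing)
        run = 0 if arousing else run + 1
    pre, kind = np.asarray(pre), np.asarray(kind)
    if kind.all() or not kind.any():
        return None  # unusable session: needs both trial types
    return pre[kind].mean() - pre[~kind].mean()

diffs = [d for d in (session() for _ in range(20000)) if d is not None]
print(f"per-session (arousing - calm) artefact: {np.mean(diffs):+.4f} "
      f"+/- {np.std(diffs) / np.sqrt(len(diffs)):.4f}")
```

The particular numbers depend entirely on the assumed expectation model, of course; the point is that the artefact can be quantified for a given protocol.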

Unfortunately most of Schwarzkopf's other points about the statistics don't seem well founded to me:

(1) Schwarzkopf suggests including data from conventional studies which may show the same effect. But the problem is that if such studies use a counterbalanced design, in which the nature of later stimuli is predictable from earlier ones, then that can worsen expectation bias. Schwarzkopf tries to dismiss these concerns, claiming that there is no problem unless subjects are aware of the counterbalanced design. 

But as Mossbridge et al. point out in their response, that's a fallacy. There will generally be a bias in the subjects' response based on past stimuli, even if they know nothing of the experimental design. If in addition the nature of the later stimuli depends on the earlier ones, rather than being independently randomised, then this dependence, combined with the bias in the subjects' response, can produce a spurious difference between calm and arousing trials.
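
A toy variant of the same simulation (again my own sketch, with made-up parameters) shows how counterbalancing turns a harmless expectation pattern into a spurious effect: with fixed stimulus counts the gambler's fallacy becomes partially correct, so the subject's expectation genuinely correlates with the upcoming stimulus even though the subject knows nothing of the design.

```python
import numpy as np

rng = np.random.default_rng(1)

def counterbalanced_session(n_arousing=13, n_calm=27):
    """A counterbalanced session: the exact number of each stimulus type is
    fixed in advance, so later stimuli are (slightly) predictable from
    earlier ones. The subject knows nothing of this, but their arousal
    still grows with the run of calm trials (gambler's fallacy)."""
    order = rng.permutation(np.array([True] * n_arousing + [False] * n_calm))
    run, pre = 0, []
    for arousing in order:
        pre.append(run + rng.normal(0.0, 1.0))  # expectation + noise
        run = 0 if arousing else run + 1
    pre = np.asarray(pre)
    return pre[order].mean() - pre[~order].mean()

diffs = [counterbalanced_session() for _ in range(20000)]
print(f"spurious (arousing - calm) difference: {np.mean(diffs):+.4f} "
      f"+/- {np.std(diffs) / np.sqrt(len(diffs)):.4f}")
```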

(2) Schwarzkopf thinks that because there is generally a larger proportion of calm trials than arousing trials, subjects will come to expect calm trials, and will be right most of the time. So the nature of the stimuli will not really be unpredictable.

That's obviously a fallacy. The experiment doesn't measure whether subjects can predict the nature of the trial more than 50% of the time. It measures the difference between the subjects' response before arousing stimuli and the response in their absence. An overall expectation that all the trials are likelier to be calm than arousing clearly can't produce a significant difference of that kind.
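
To spell out the cancellation (my notation, not from either side of the exchange): if expecting calm trials merely adds a stimulus-independent shift c to every pre-stimulus response, the shift appears in both conditional means and drops out of their difference:

```latex
% Additive model: r_t = c + \epsilon_t, with the stimulus randomised
% independently of everything that came before. Then
\mathbb{E}[r_t \mid \text{arousing}] - \mathbb{E}[r_t \mid \text{calm}]
  = (c + \mathbb{E}[\epsilon_t]) - (c + \mathbb{E}[\epsilon_t]) = 0.
```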

(3) Schwarzkopf suggests that the effects measured could be artifacts of signal filtering during pre-processing.

This possibility was discussed in the original meta-analysis, where it was noted that high-pass filtering could produce a pre-stimulus artifact, but that it would be in the opposite direction to that expected for presentiment. It's stated that only two of the studies included in the meta-analysis used high-pass filtering, and that only one of the filters would be vulnerable to such an artifact. It's not clear whether Schwarzkopf had any particular reason to think that filtering artifacts might be a problem in the other studies.
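
For anyone who wants to see the filtering artifact for themselves, here's a minimal sketch (my own toy example, not any study's actual pre-processing pipeline): zero-phase high-pass filtering smears a post-stimulus response backwards in time, producing a pre-stimulus undershoot - that is, a deflection in the opposite direction to presentiment, as the meta-analysis noted.

```python
import numpy as np
from scipy import signal

fs = 100.0                          # sampling rate (Hz)
t = np.arange(-5.0, 10.0, 1 / fs)   # stimulus onset at t = 0
# Idealised post-stimulus response (zero before the stimulus).
response = np.where(t > 0, (t / 2) * np.exp(-t / 2), 0.0)

# Zero-phase (acausal) 0.05 Hz high-pass filter applied in pre-processing.
b, a = signal.butter(2, 0.05 / (fs / 2), btype="highpass")
filtered = signal.filtfilt(b, a, response)

pre = (t > -3) & (t < 0)
print(f"pre-stimulus mean before filtering: {response[pre].mean():+.4f}")  # zero
print(f"pre-stimulus mean after filtering:  {filtered[pre].mean():+.4f}")  # undershoot
```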

(4) It has been suggested that the problem of expectation bias could be eliminated by running only one trial per subject. Schwarzkopf suggests there would still be expectation bias, because of random differences between the subjects selected for the trials.

That's not what's normally meant by expectation bias in this context. Differences between subjects would have no tendency to produce significant differences between the responses to randomly selected calm and arousing stimuli, so there would be no bias in that sense.
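
Again, a quick sketch makes the point (my own toy simulation, with made-up variances): give every subject a stable idiosyncratic baseline, run one trial each, and the between-subject differences inflate the variance of the calm/arousing contrast but leave its expected value at zero - noise, not bias.

```python
import numpy as np

rng = np.random.default_rng(2)

def single_trial_experiment(n_subjects=200, p_arousing=0.33):
    """One trial per subject: each subject contributes one pre-stimulus
    measurement (their idiosyncratic baseline plus noise), and the stimulus
    type is randomised independently of who the subject is."""
    baseline = rng.normal(0.0, 2.0, n_subjects)  # stable subject differences
    pre = baseline + rng.normal(0.0, 1.0, n_subjects)
    arousing = rng.random(n_subjects) < p_arousing
    return pre[arousing].mean() - pre[~arousing].mean()

diffs = [single_trial_experiment() for _ in range(20000)]
print(f"mean (arousing - calm) difference: {np.mean(diffs):+.4f}")  # ~ 0: no bias
print(f"spread across repeat experiments:  {np.std(diffs):.4f}")    # noise only
```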

(5) Schwarzkopf expresses concern that sometimes the baseline for measurements is established using a single point rather than a range of values, leading to greater variability of the data, and that sometimes a baseline is defined on a trial-by-trial basis, so that its position can be influenced by a residual response to the previous trial.

The objection to the method of fixing a single baseline for a whole session of trials is hard to understand, because typically what is analysed is the average difference between calm and arousing trials in each session, which will be independent of the baseline. Even if another analysis method were used in which the position of the baseline mattered - and if the variability of the data were increased - the effect of that would not be to produce a spurious difference between calm and arousing trials, but to tend to obscure any genuine difference.
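
In symbols (my notation): with A the set of arousing trials, C the calm trials, and b the single session-wide baseline, the session's contrast is

```latex
\frac{1}{|A|}\sum_{t \in A}(r_t - b) - \frac{1}{|C|}\sum_{t \in C}(r_t - b)
  = \bar{r}_A - \bar{r}_C.
```

The baseline cancels exactly, whatever value it takes.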

Certainly if trial-by-trial baselines were used, and the position of the baseline were influenced by the response to the preceding trial, that would be undesirable. But it would be only a particular example of the general dependence of the response to each trial on all the preceding trials (as in expectation bias). Regardless of how baselines are chosen, the effect of that dependence needs to be eliminated, whether by using an estimate of its size, by modifying the protocol to use single trials, or by using an appropriate analysis technique. (It seems that trial-by-trial baselines were used in only 2 of the 26 studies included in the meta-analysis, though for some reason Schwarzkopf describes these two as "many of these studies".)
(2018-04-25, 04:18 PM)Wormwood Wrote: I don't know how feasible it is to apply this methodology to something as broad-based and complicated as stock market prediction. The task would be incredible, I would think, and difficult to decipher. But maybe somebody is up to the challenge. It's a much simpler mental exercise to anticipate a simple picture or something of the sort. Or is it? I don't know.

Yes, I think it's a big leap to say that if people can score above chance in one of Bem's experiments with subliminal images, or show presentiment just before receiving a stimulus, then they should be able to predict the stock market or win at roulette. (And I don't think the hit rates in Bem's experiments are big enough to overcome the house advantage for roulette anyway.)

People have applied associative remote viewing to the financial markets, and have reported successful results. But I think it's fair to ask why parapsychology funding is a problem if this kind of thing can really be done sustainably:
http://psiphen.colorado.edu/Pubs/Smith14.pdf
As the paper by Houran et al. didn't appear to be freely available online, for the sake of completeness I bit the bullet and bought a copy for $11.95. I'm afraid I don't think it was money well spent. 

As this is a 12-page paper, I'd hoped for something a bit more substantial than Schwarzkopf's 4-page one. But it breaks down roughly as follows:
1 and a bit pages: abstract and introduction
1 and a half pages: criticism regarding statistics and replication
1 and a half pages: arguments that precognition breaks the laws of physics and thermodynamics
2 and a half pages: discussion of research on intuition, with prominence given to work by the first two authors (I don't understand how this is supposed to be relevant to precognition experiments)
1 and a half pages: discussion (including the suggestion that Mossbridge and Radin are displaying "delusional thinking")
3 and a bit pages: references.

There is very little here that is specific to the paper by Mossbridge and Radin to which it's supposed to be a response.

On statistics, there is the usual attack on the use of frequentist analysis.

The other main criticism runs as follows (pp. 99, 100):
"Also, it is no secret to most researchers that psychological experiments are inherently noisy and their results are potentially distorted by many factors, not all of which are random and not all of which can be controlled. Bluntly speaking, the tests and measurements community call this the crap factor to remind us that small effects, regardless of their statistical significance, are best interpreted as artifacts. For instance, should we really expect human behavior to be described precisely by theoretical coin flips when the behavior of real coins is known to deviate from theory? We suspect that Mossbridge and Radin might well agree, as they noted that small experimental effects could reflect ". . . consistent artifacts or methodological errors instead of a genuine effect" (p. 10). It is somewhat surprising that they advocate the use of meta-analysis, as well as propose or endorse interpretations of small experimental effects as viable examples of precognition, retrocausation, or paranormal presentiment."

Obviously this is very far from explaining how the results outlined by Mossbridge and Radin could actually have resulted from noise in the measurements, or how distortion could have arisen.  

I've copied below the phrase quoted from Mossbridge and Radin so that people can see it in its original context, and judge for themselves how fairly it has been used:
"A metaanalysis of such [forced choice precognition] experiments based on reports of 309 experiments published between 1935 and 1987 (Honorton & Ferrari, 1989) yielded a small overall effect size (Rosenthal ES = 0.02). Nevertheless, due to the high statistical power afforded by the many studies considered, it was statistically significant (Stouffer Z = 6.02, p < 1.1 x 10^-9). Using Rosenthal’s failsafe estimate, the authors calculated that 14,268 unreported studies averaging a null effect would have been required to transform the database into one with an overall null effect. The size of that file-drawer estimate, in comparison with the number of laboratories studying precognition, suggested that selective reporting was an unlikely explanation for the observed effect. However, small effect sizes may also reflect consistent artifacts or methodological errors instead of a genuine effect."
https://www.academia.edu/36395925/Precog...e_Evidence
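
For anyone who wants to check the arithmetic behind the statistics in that passage, Stouffer's Z and Rosenthal's fail-safe N are both one-liners. The z-scores below are made up purely for illustration - they are not the 309 values from Honorton & Ferrari:

```python
import numpy as np

def stouffer_z(z_scores):
    """Stouffer's method: combine per-study z-scores into one overall Z."""
    z = np.asarray(z_scores, dtype=float)
    return z.sum() / np.sqrt(len(z))

def rosenthal_failsafe_n(z_scores, alpha_z=1.645):
    """Rosenthal's fail-safe N: the number of unreported null studies that
    would pull the combined one-tailed result back below significance."""
    z = np.asarray(z_scores, dtype=float)
    return (z.sum() / alpha_z) ** 2 - len(z)

zs = [1.2, 0.4, 2.1, -0.3, 1.7]  # made-up z-scores, purely for illustration
print(f"Stouffer Z  = {stouffer_z(zs):.2f}")
print(f"fail-safe N = {rosenthal_failsafe_n(zs):.1f}")
```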
(2018-04-25, 04:24 PM)Chris Wrote: (4) It has been suggested that the problem of expectation bias could be eliminated by running only one trial per subject. 

Mossbridge and Radin, in their review article, say that as far as they know this is "the only guaranteed way to rule out order or expectation effects as an explanation for presentiment". In their new meta-analysis update, Duggan and Tressoldi say that single-trial studies are "becoming more dominant in this research domain", but they refer explicitly to only two studies by Mossbridge, in 2014 and 2015, one with significant results and the other without. (Of course, analysing only one trial per participant drastically reduces the statistical power of the study; see the sketch below.)
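
A rough power calculation (my own sketch, using the standard normal approximation and illustrative effect sizes, not figures from Duggan and Tressoldi) shows what single-trial designs are up against:

```python
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample comparison of
    means (normal approximation), with one observation per subject."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z / d) ** 2

# In a single-trial design each subject contributes exactly one pre-stimulus
# measurement, so these counts are counts of *people*, not trials.
for d in (0.10, 0.20, 0.30):  # illustrative standardised effect sizes
    print(f"d = {d:.2f}: about {n_per_group(d):,.0f} subjects per group")
```

At the small effect sizes typical of this literature, that means recruiting hundreds or thousands of participants where a multi-trial design could make do with far fewer.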

Another proposal to eliminate bias was Kennedy's suggestion in 2013 that the analysis method should be turned on its head. Instead of dividing the measurements into two classes preceding the two kinds of stimulus, and comparing their averages, he suggested that a statistic should be defined based on the measurements, which would be capable of predicting which kind of stimulus was about to be applied:
https://jeksite.org/psi/jp13b.pdf
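
In outline, Kennedy's proposal inverts the analysis into a prospective classification problem, which is easy to sketch (toy code of my own, with made-up noise data; with real recordings the statistic and threshold would be fixed in advance):

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(3)

# Made-up data: one pre-stimulus measurement per trial, plus the stimulus
# that followed. With real recordings these arrays would come from the lab.
pre = rng.normal(0.0, 1.0, 400)
arousing = rng.random(400) < 0.5

# Prediction statistic fixed on earlier data only: predict "arousing"
# whenever the pre-stimulus measure exceeds a threshold from the first half.
train, test = slice(0, 200), slice(200, 400)
threshold = np.median(pre[train])
predicted = pre[test] > threshold

hits = int((predicted == arousing[test]).sum())
n = predicted.size
print(f"prospective hit rate: {hits}/{n} = {hits / n:.3f}")
print(f"binomial p vs chance: {binomtest(hits, n, 0.5).pvalue:.3f}")
# (On this noise data the hit rate sits at chance; the template is the point.)
```

Because the stimulus is randomised independently of everything that precedes it, a genuinely above-chance prospective hit rate can't be manufactured by expectation effects.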

If feasible, that approach would eliminate bias, but it doesn't seem to have found favour with experimenters. The only such study referred to by Duggan and Tressoldi is this one from last year by Baumgart and others (including Schooler):
https://labs.psych.ucsb.edu/schooler/jon...982773.pdf

That is a conference presentation of preliminary data from just 8 participants in a study testing whether EEG measurements can predict whether a stimulus is going to be visual or auditory. Given the crude nature of the statistics used for prediction, the results seem promising - with typical success rates around 55% rather than 50% - but there is a big red flag in the form of the control sessions. When the electrodes were attached to a watermelon instead of a human head, some significant deviations from chance were still observed. It's difficult to see how that's possible unless there is some kind of error in the experimental protocol or the analysis.
Article in "Psychology Today".
The (Really) Astonishing Hypothesis: Looking into the Future

Quote: An impossible hypothesis is tested scientifically

Posted May 12, 2018

Quote: In 2011, an article by the social psychologist Daryl Bem caused a commotion in the science community. Daryl Bem showed in a series of psychological experiments with over 1000 participants that people on average were able to predict the outcome of future events that could otherwise not be anticipated by “normal” means on an above-chance level.
(2018-04-25, 04:42 PM)Chris Wrote: People have applied associative remote viewing to the financial markets, and have reported successful results. But I think it's fair to ask why parapsychology funding is a problem if this kind of thing can really be done sustainably:
http://psiphen.colorado.edu/Pubs/Smith14.pdf

Vortex pointed out that all the issues of the Journal of Scientific Exploration are now available online. In the current issue is a paper entitled "An Ethnographical Assessment of Project Firefly: A Yearlong Endeavor to Create Wealth by Predicting FOREX Currency Moves with Associative Remote Viewing" by Debra Lynne Katz, Igor Grgic and T. W. Fendley.

From the abstract:
More than 60 remote viewers contributed 177 intuitive-based associative remote viewing (ARV) predictions over a 14-month period.
...
Investors, many of whom were also participants (viewers and judges), pooled investment funds totaling $56,300 with the stated goal of “creating wealth aggressively.” Rather than meeting that goal, however, most of the funds were lost over the course of the project.
This sort of thing rather reminds me of discussions and questions which I come across from time to time on the topic of reincarnation. One person or another will be asking, what sort of life must I lead (for example must I do good works) in order to ensure that in my next incarnation I will be rich and powerful? Essentially how to take the so-called laws of karma (if such exist - personally I think it a much-misunderstood subject) and bend them to our human will.

Attempts to generate wealth by predicting the stock market using various psi techniques seem to fall into a similar category. I'm always deeply doubtful about such projects. (I thought I'd heard of some such project which was successful. Maybe it was mentioned by Victor Zammit? Dean Radin? I'm not sure.)
(2018-07-03, 03:45 AM)Typoz Wrote: (I thought I'd heard of some such project which was successful. Maybe it was mentioned by Victor Zammit? Dean Radin? I'm not sure).

The Smith, Laham, and Moddel (2014) paper, linked to in my previous post, claimed success. They said the direction of change of the Dow Jones Industrial Average had been predicted correctly seven times out of seven, and that "a significant financial gain" had been made.
(2018-07-03, 03:45 AM)Typoz Wrote: Attempts to generate wealth by predicting the stock market making use of various psi techniques seems to fall into a similar category. I'm always deeply doubtful about such projects. (I thought I'd heard of some such project which was successful. Maybe it was mentioned by Victor Zammit? Dean Radin? I'm not sure).


Targ had a story about making money on silver futures.
https://www.wanttoknow.info/a-did-psychi...ver-market
(2018-07-02, 11:02 PM)Chris Wrote: Vortex pointed out that all the issues of the Journal of Scientific Exploration are now available online. In the current issue is a paper entitled "An Ethnographical Assessment of Project Firefly: A Yearlong Endeavor to Create Wealth by Predicting FOREX Currency Moves with Associative Remote Viewing" by Debra Lynne Katz, Igor Grgic and T. W. Fendley.

From the abstract:
More than 60 remote viewers contributed 177 intuitive-based associative remote viewing (ARV) predictions over a 14-month period.
...
Investors, many of whom were also participants (viewers and judges), pooled investment funds totaling $56,300 with the stated goal of “creating wealth aggressively.” Rather than meeting that goal, however, most of the funds were lost over the course of the project.


Courtesy of the SPR Facebook page - on a similar subject, here's a paper by Dick Bierman entitled "Can Psi Research Sponsor Itself? Simulations and results of an automated ARV-casino experiment":
https://www.academia.edu/16693484/Can_ps...experiment

Apparently this was co-written with Thomas Rabeyron and - from other sources - presented at the 2013 Parapsychology Convention. The simulations indicate that it should work, but as they're only simulations I suppose the question in the title has to be classified under "Questions to which the answer is maybe."
