PEAR Lab micro-PK experiments


A couple of points that were new to me about the PEAR Lab micro-PK experiments using random event generators:

(1) In an interview with Jeffrey Mishlove, James Alcock claimed that the high-scoring participant responsible for nearly a quarter of the trials - without whom the overall database would become non-significant - was "the researcher herself who ran, conducted, designed the experiment" (presumably meaning Brenda Dunne).

(2) In an analysis published in 1986, John Palmer wrote (p. 116) that when the data were analysed by series (of which there were 61), for the baseline (i.e. control) runs, there were no Z values greater than 1.645 or less than -1.645. That would be remarkably unlikely to happen by chance, as we should expect 10% of the series to fall outside those limits, and the chances of none of the 61 series doing so would be about 0.0016, I reckon.

Palmer's reference is to Jahn, Nelson and Dunne, "Variance effects in REG series score distributions" (1985), paper presented at the Parapsychological Association convention.
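
Here's a minimal sketch of that calculation in Python, assuming the 61 series are independent and each has a 10% chance of giving |Z| > 1.645 under the null:

Code:
# Chance that none of 61 independent baseline series gives |Z| > 1.645,
# when each series has a 10% chance of doing so under the null hypothesis.
n_series = 61
p_extreme = 0.10                       # P(|Z| > 1.645) for a standard normal Z
p_none = (1 - p_extreme) ** n_series
print(f"{p_none:.4f}")                 # prints about 0.0016
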
(2019-03-02, 07:47 PM)Chris Wrote: A couple of points that were new to me about the PEAR Lab micro-PK experiments using random event generators: ...

Alcock's suggestion is that the smaller-than-expected variance for the control series would be consistent with some runs with high or low results having been re-designated post hoc as high-intention or low-intention runs.

But I'm having trouble reconciling that scenario with the fact that the overall statistical significance disappears if the 14 out of 61 series contributed by the high-scoring participant are removed from the database. That means that for the other 47 series the high- and low-intention runs aren't significant, but high and low Z values are still missing for the control runs. (The latter is still very unlikely to happen by chance, even when the number of series is reduced from 61 to 47 - I reckon the probability would be only 0.0071.)
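
To make both points concrete, here's a rough toy simulation of the sort of post hoc re-labelling Alcock seems to have in mind, together with the chance calculation for 47 honestly labelled series. The run counts and the number of re-labelled runs are arbitrary illustrative choices, not the actual PEAR protocol.

Code:
import numpy as np

rng = np.random.default_rng(0)

# Toy version of Alcock's scenario: every run is really a null REG run, but
# after the fact the most extreme runs in each series are re-labelled as
# "high intention" / "low intention", leaving the middling runs as "baseline".
n_series, runs_per_series, n_relabel = 47, 50, 10   # arbitrary illustrative numbers

baseline_z = []
for _ in range(n_series):
    z = np.sort(rng.standard_normal(runs_per_series))   # null run scores
    baseline = z[n_relabel:-n_relabel]                   # middling runs kept as "baseline"
    # series-level Z for the baseline runs, using the theoretical unit variance
    baseline_z.append(baseline.sum() / np.sqrt(len(baseline)))

print("baseline series with |Z| > 1.645:", np.sum(np.abs(baseline_z) > 1.645), "of", n_series)
# far fewer than the roughly 10% expected if the baseline runs were honestly labelled

print("chance of no extreme series among 47 honest baselines:", round(0.9 ** 47, 4))
# about 0.0071, the figure above
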
(2019-03-03, 12:45 PM)Chris Wrote: Alcock's suggestion is that the smaller-than-expected variance for the control series would be consistent with some runs with high or low results having been re-designated post hoc as high-intention or low-intention runs.

Could it also be consistent with a psi effect? In other words, could the participants be using PK to constrain the control runs tightly to the mean, since that's what they understand is expected of a control run?
(2019-03-03, 02:18 PM)Laird Wrote: Could it also be consistent with a psi effect? In other words, could the participants be using PK to constrain the control runs tightly to the mean, since that's what they understand is expected of a control run?

Yes - that suggestion is mentioned in the interview with Alcock. I think it's the explanation that was adopted by the PEAR researchers.

But if that's the explanation, it seems odd that this effect is, in a sense, so much stronger than the effects the participants were deliberately aiming for in the runs where they were trying to influence the random event generator. And odd that it remains pretty strong when the high-scoring participant is excluded and the high/low-intention runs become non-significant.

However, as I say, Alcock's suggestion that the results are consistent with fraud seems equally difficult to reconcile with the data.
(2019-03-02, 07:47 PM)Chris Wrote: (1) In an interview with Jeffrey Mishlove, James Alcock claimed that the high-scoring participant responsible for nearly a quarter of the trials - without whom the overall database would become non-significant - was "the researcher herself who ran, conducted, designed the experiment" (presumably meaning Brenda Dunne).

In an interview with Jeffrey Mishlove (at about 13:30), Brenda Dunne says:
"But it was interesting, because the males got these results in the direction they wanted, but their effects were very small. The females got much bigger effects, but they weren't correlated with their intention. Their highs would be huge, so would be their lows, and so would be their baselines. So they were getting this very large excursion ... but it was not correlated with their direction of intention. The males were getting a correlation with direction of intention, but the effects were very tiny."

Maybe she was excluding herself, but otherwise it seems an odd thing to say if she, a female, was single-handedly responsible for the significance of the database. Though I suppose the difference between two large and variable numbers can still be statistically significant.
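
Just to illustrate that last point with arbitrary numbers (nothing to do with the actual PEAR data): two samples whose individual values swing widely can still differ significantly in their means once there are enough runs.

Code:
import numpy as np

rng = np.random.default_rng(1)

# Two noisy samples ("high intention" vs "low intention" runs) with large
# run-to-run scatter but a tiny opposite shift in their means.
# All numbers are arbitrary illustrative choices, not PEAR data.
n = 5000
high = rng.normal(loc=+0.05, scale=1.0, size=n)
low  = rng.normal(loc=-0.05, scale=1.0, size=n)

# two-sample z statistic for the difference of means
z = (high.mean() - low.mean()) / np.sqrt(high.var(ddof=1) / n + low.var(ddof=1) / n)
print(f"z = {z:.2f}")   # the small 0.1 mean separation is still detected at this sample size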
