Dean Radin preprint on "Tricking the Trickster"


(2018-08-24, 08:04 AM)Chris Wrote: On the assumption that the result of Radin's analysis is a real effect, it's true that using this statistic would add about an extra 25% of random data. As the overall Z value he found is 10.6, I can't believe that would wipe out a genuine effect.

I was going to calculate some figures on that basis, but then I noticed something a bit strange. 

The results section says that the total number of trials was nearly 101 million, and the total number of pairs of trials was over 96 million - a difference of about 5 million, or about 1 per session, given that the commonest number of trials per session was 20.

But the difference should be about 5 per session, as the trials from each session were divided into 5 sequences before forming the pairs. (In the illustration in figure 3, which had 3 options rather than 5, there were 14 trials and 4+5+2=11 pairs of trials - a difference of 3.)

That suggests the trials may not in fact have been divided into 5 sequences, as described in the Methods section. Or at any rate something doesn't seem consistent here.
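
For concreteness, here's a minimal sketch of the pair-counting arithmetic (Python; the 20-trial session size is the commonest value reported, and the helper function is just my own bookkeeping, not anything from the paper):

```python
# Minimal sketch of the pair-counting arithmetic. Splitting a session's
# trials into s non-empty sequences turns n trials into n - s consecutive
# pairs, because each sequence of length k contributes k - 1 pairs.

def pairs_after_split(n_trials: int, n_sequences: int) -> int:
    return n_trials - n_sequences

# Figure 3's illustration: 14 trials in 3 sequences -> 4 + 5 + 2 = 11 pairs
print(pairs_after_split(14, 3))   # 11, a difference of 3

# A typical 20-trial session split into 5 sequences should lose 5 pairs...
print(pairs_after_split(20, 5))   # 15, a difference of 5

# ...whereas the reported totals (about 1 pair lost per session) look like
# what you'd get with no split at all:
print(pairs_after_split(20, 1))   # 19, a difference of 1
```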
(2018-08-24, 09:31 PM)Chris Wrote: The results section says that the total number of trials was nearly 101 million, and the total number of pairs of trials was over 96 million - a difference of about 5 million, or about 1 per session, given that the commonest number of trials per session was 20.

But the difference should be about 5 per session, as the trials from each session were divided into 5 sequences before forming the pairs. (In the illustration in figure 3, which had 3 options rather than 5, there were 14 trials and 4+5+2=11 pairs of trials - a difference of 3.)

That's an interesting observation. The available trial pairs for each session (or run) should equal the total number of trials minus the number of distinct responses logged. It seems that far fewer pairs were deducted from the original total than would be expected.

Quote: That suggests the trials may not in fact have been divided into 5 sequences, as described in the Methods section. Or at any rate something doesn't seem consistent here.

Agreed. It looks like Dean has some serious splainin' to do.
(2018-08-24, 09:31 PM)Chris Wrote: I was going to calculate some figures on that basis, but then I noticed something a bit strange. 

The results section says that the total number of trials was nearly 101 million, and the total number of pairs of trials was over 96 million - a difference of about 5 million, or about 1 per session, given that the commonest number of trials per session was 20.

But the difference should be about 5 per session, as the trials from each session were divided into 5 sequences before forming the pairs. (In the illustration in figure 3, which had 3 options rather than 5, there were 14 trials and 4+5+2=11 pairs of trials - a difference of 3.)

That suggests the trials may not in fact have been divided into 5 sequences, as described in the Methods section. Or at any rate something doesn't seem consistent here.

Have you e-mailed him?
(2018-08-25, 08:58 AM)Roberta Wrote: Have you e-mailed him?

I plan to contact him (I don't think he "does" email, but there is a contact form on his website), but first I'd like to prepare some proper equations describing the bias issue, and to mull them over to make sure there aren't any other errors lurking.

If anyone else here has any other thoughts, that would be good.

I think that on the whole the various analyses in the preprint are consistent with the optional stopping explanation, but there were two things I didn't understand:

(1) On page 8, Radin looked for correlations between his sequential statistic and non-uniformities in the matrices representing transitions from target to target. Mostly he didn't find any, but in one case he did. On days when the transitions between each target and the next-but-one target happened to be non-uniform, the sequential statistic was significantly smaller (p = 0.009).

(2) On page 10, Radin tried shuffling the data in various ways to see what happened to the sequential statistic. When he performed a circular shift on the targets for each participant, so that guess number n was compared with target number n+1, the sequential statistic became significantly negative (though at only -3 standard deviations, compared with 10 standard deviations for the unshifted data).
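
For what it's worth, here's a rough sketch of the circular shift itself in (2) - the sequential statistic isn't reproduced here, and the array names, random data, and simple hit-rate comparison are my own placeholders, not Radin's code:

```python
import numpy as np

# Rough sketch of the circular shift in (2): compare each guess with the
# *next* target rather than its own. With random data both comparisons
# sit at chance (1/5); in Radin's data the shifted version of his
# sequential statistic reportedly went about 3 standard deviations below
# chance.

rng = np.random.default_rng(0)
guesses = rng.integers(0, 5, size=100_000)  # 5 possible targets
targets = rng.integers(0, 5, size=100_000)

hit_rate = np.mean(guesses == targets)                   # guess n vs target n
shifted_rate = np.mean(guesses == np.roll(targets, -1))  # guess n vs target n+1

print(hit_rate, shifted_rate)  # both ~0.2 for random data
```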
Contacting Dean Radin has been on my "To Do" list for 15 months, but a rather longer version of this work has now been published in the Journal of Scientific Exploration under the title "Tricking the Trickster: Evidence for Predicted Sequential Structure in a 19-Year Online Psi Experiment."
https://www.scientificexploration.org/jo...sue-4-2019
(2019-12-19, 11:39 AM)Chris Wrote: Contacting Dean Radin has been on my "To Do" list for 15 months, but a rather longer version of this work has now been published in the Journal of Scientific Exploration under the title "Tricking the Trickster: Evidence for Predicted Sequential Structure in a 19-Year Online Psi Experiment."
https://www.scientificexploration.org/jo...sue-4-2019

The new version isn't hugely different.

The various statistical tests intended to check whether the effect could be due to optional stopping are replaced by a simpler one, but I don't think it will necessarily capture such an artefact (at least not if people are more likely to stop immediately after several misses in succession, rather than basing their decision on the overall average hit rate).
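
To illustrate the worry, here's a crude simulation - my own construction, not anything from the paper, with illustrative parameters - of participants who quit after a run of misses. The pooled hit rate stays at chance, so a check based on overall hit rates wouldn't flag it, but the final trials of sessions are overwhelmingly misses:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_session(max_trials=40, stop_after=3):
    """One participant: guess among 5 targets, quitting after `stop_after`
    misses in a row (or after max_trials). Parameters are illustrative."""
    hits, miss_run = [], 0
    for _ in range(max_trials):
        hit = rng.integers(0, 5) == rng.integers(0, 5)
        hits.append(hit)
        miss_run = 0 if hit else miss_run + 1
        if miss_run >= stop_after:
            break
    return hits

sessions = [run_session() for _ in range(100_000)]
all_trials = [h for s in sessions for h in s]
last_trials = [s[-1] for s in sessions]

print(np.mean(all_trials))   # ~0.2: pooled hit rate is still at chance
print(np.mean(last_trials))  # near 0: sessions almost always end on a miss
```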

One new finding that at first sight looks inconsistent with an optional stopping artefact is that if a similar analysis is performed, but without first dividing the trials into five separate series according to the position of the participant's guess, then the results become non-significant for one of the two experiments, and also non-significant for both experiments combined.

But if it is an artefact, it results from the omission of pairs of trials spanning two different sessions. And if the trials aren't divided into five series, then the number of such pairs is reduced by a factor of five. So the size of the artefact would be expected to drop to only about 20% of its previous value. I think the results of the new analysis are broadly consistent with that. (For some reason the effect is about twice as big in one experiment, the "Quick Remote Viewing" one, as in the other, the Card Guessing one.)

I still don't understand why the total number of pairs of trials is roughly 5% smaller than the total number of trials, rather than about 25% smaller.
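
The back-of-envelope arithmetic, using the rounded totals quoted above from the Results section and the 20-trial modal session size, looks like this:

```python
# Back-of-envelope check of the 5% vs 25% discrepancy, using the rounded
# totals from the Results section and ~20 trials per session.
total_trials = 101_000_000
total_pairs = 96_000_000
sessions = total_trials / 20

deficit = total_trials - total_pairs   # ~5 million pairs missing
print(deficit / sessions)              # ~1 pair lost per session

# Splitting each session into 5 sequences should lose ~5 pairs per session:
print(5 * sessions / total_trials)     # ~0.25 -> pairs ~25% fewer than trials
```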
