Results conundrum


I noticed, looking over the recent dream-ESP meta-analysis, that there was an interesting experiment by Watt in which a decision had to be made whether to include or exclude some additional results. Basically, under the conditions of the study, participants were given feedback after each trial as to how they were doing, and some of them dropped out after receiving that feedback. The concern was that instead of dropping out randomly (in which case the results in the drop-out group would not be biased with respect to the main group), the subjects were dropping out because their results were either too good (fear of psi) or too poor (discouraged by early failure). When Watt looked at the results from the drop-out group, it was discovered that those results were indeed substantially and significantly worse than those of the participants who continued, creating a selection bias that would artificially inflate the significance and effect size in the remainder.

https://journals.ub.uni-heidelberg.de/in.../34888/pdf
http://www.research.ed.ac.uk/portal/file...ersion.pdf
https://koestlerunit.files.wordpress.com...jpnote.pdf
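To see how strong this kind of selection effect can be, here is a minimal simulation sketch (Python, with entirely made-up numbers - this is not the study's actual design). Each trial is a pure-chance guess at 25%, and any participant who misses on their first two trials quits after seeing the feedback. The survivors' hit rate ends up well above chance even though no psi is present:

```python
import random

CHANCE = 0.25        # hypothetical chance hit rate (1 target in 4)
TRIALS = 4           # hypothetical trials per participant
N_SUBJECTS = 100_000

completed_hits = completed_trials = 0
for _ in range(N_SUBJECTS):
    results = [random.random() < CHANCE for _ in range(TRIALS)]
    # Hypothetical drop-out rule: quit after feedback if the first
    # two trials were both misses ("discouraged by early failure").
    if not any(results[:2]):
        continue  # this subject's data never reaches the analysis
    completed_hits += sum(results)
    completed_trials += TRIALS

print(f"hit rate among completers: {completed_hits / completed_trials:.3f}")
# Prints roughly 0.41 rather than 0.25: selection alone creates an "effect".
```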

Lance et al. elected not to include those results, of course. How do you think this should have been handled? What if it had been the other way around and the "fear of psi" idea had been confirmed - i.e. the original study had not shown a significant effect, but the drop-outs had excessively good results which led to a substantial and significant effect when their data were added in?

Linda
I think that Lance et al. were right to exclude these results. No matter which way the unreported results had gone (either “early disappointment” or “fear of psi”), if a subject is allowed to stop participating because they don’t like their results, the experiment is no longer measuring just psi ability but also people’s reaction to their results.

This flaw is pretty rare in parapsychology, but it happens occasionally. One of the early ganzfeld experiments (Terry, 1975, A Multiple Session Ganzfeld Study) had this issue, but it is still lurking in the depths of most ganzfeld meta-analyses.
(2017-12-02, 07:20 PM)ersby Wrote: I think that Lance et al. were right to exclude these results. No matter which way the unreported results had gone (either “early disappointment” or “fear of psi”), if a subject is allowed to stop participating because they don’t like their results, the experiment is no longer measuring just psi ability but also people’s reaction to their results.

This flaw is pretty rare in parapsychology, but it happens occasionally. One of the early ganzfeld experiments (Terry, 1975, A Multiple Session Ganzfeld Study) had this issue, but it is still lurking in the depths of most ganzfeld meta-analyses.

I'm sorry, I wasn't clear. Lance et al. did include the original (now known to be biased) study, calling it an "impressive precognition study". They excluded the results from the follow-up report which negated those "impressive" results.

Linda
Ah. I still haven't got round to reading that meta-analysis yet.

In that case, they made a mistake. It should not be included.
I'm afraid it was a poorly designed experiment, and the results are potentially biased one way or the other, whether the data from the drop-outs are included or not. The bias owing to drop-outs could have been eliminated by prespecifying the total number of trials and including data from subjects who contributed fewer than four trials. As it is, the only really safe thing to do would be to eliminate the whole study from the meta-analysis.
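In code terms, that fix amounts to an intention-to-treat style analysis: count every trial that was actually run, including trials from subjects who quit early. Continuing the hypothetical simulation sketched earlier in the thread:

```python
import random

CHANCE, TRIALS, N_SUBJECTS = 0.25, 4, 100_000  # same hypothetical setup

all_hits = all_trials = 0
for _ in range(N_SUBJECTS):
    results = [random.random() < CHANCE for _ in range(TRIALS)]
    quit_early = not any(results[:2])           # same drop-out rule
    counted = results[:2] if quit_early else results
    all_hits += sum(counted)                    # keep whatever was run
    all_trials += len(counted)

print(f"hit rate over all trials actually run: {all_hits / all_trials:.3f}")
# Prints roughly 0.25: counting every trial removes the spurious effect.
```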

It's a bit worrying that Caroline Watt fell into this trap, considering her role in evaluating the design of experiments submitted to the KPU Registry.
This reminds me of the RetroPsychoKinesis Experiment:
http://www.fourmilab.ch/rpkp/experiments/summary/

Among the data displayed are the hit rates for people who did different numbers of experiments. Overall the results aren't statistically significant, but the results for people who did just one experiment were phenomenally unsuccessful (Z = 4.7). It seems someone has been having fun and games.
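For anyone wanting to check figures like that, the Z score is presumably the usual normal approximation to the binomial, z = (k − np) / √(np(1−p)). A small helper, with hypothetical counts for illustration only (the RPKP page reports the actual numbers):

```python
import math

def binomial_z(hits: int, trials: int, p_chance: float) -> float:
    """Normal approximation to the binomial: z = (k - np) / sqrt(np(1-p))."""
    expected = trials * p_chance
    sd = math.sqrt(trials * p_chance * (1 - p_chance))
    return (hits - expected) / sd

# Hypothetical illustration: 1,000 trials at a 50% chance rate would have
# to fall short by about 74 hits to give a Z near -4.7.
print(f"{binomial_z(426, 1000, 0.5):.2f}")  # prints -4.68
```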
Why would you give any feedback at all?

~~ Paul
If the existence of a thing is indistinguishable from its nonexistence, we say that thing does not exist. ---Yahzi
(2017-12-03, 12:39 AM)Paul C. Anagnostopoulos Wrote: Why would you give any feedback at all?


It was a precognition study, to see if participants would dream about a video they would later view. Eventually they had to view the video, at which point they would realize whether or not their dreams had matched it.

Linda
(2017-12-03, 12:39 AM)Paul C. Anagnostopoulos Wrote: Why would you give any feedback at all?

It's maybe worth clarifying that the hit rate was based on the assessment of independent judges, so as far as I can see the "feedback" to the subject was just the target video, not the judge's verdict on whether it was a match (although the judge's ratings were done before the subject was sent the target, for security reasons).
(2017-12-02, 08:19 PM)fls Wrote: I'm sorry, I wasn't clear. Lance et al. did include the original (now known to be biased) study, calling it an "impressive precognition study". They excluded the results from the follow-up report which negated those "impressive" results.

It's also worth noting that the inclusion of the results originally omitted by Watt didn't entirely cancel out her initial result. It reduced the hit rate from 32% to 30.6%. On the pre-planned statistical test the results remained significant at p=0.04, though Watt (as in the original paper) also presented some post hoc considerations casting doubt on the precognition interpretation. Evidently post hoc analysis is something everyone finds difficult to resist!
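For a rough sense of scale, here is a one-sided exact binomial test against an assumed 25% chance rate. The trial counts below are invented purely to match the quoted hit rates (64/200 = 32.0%, 77/252 = 30.6%); the paper's actual Ns and its pre-planned test differ, which is why this sketch doesn't reproduce the exact p = 0.04:

```python
from scipy.stats import binomtest

# Invented counts chosen only to match the quoted hit rates; the chance
# hit rate is assumed to be 25% (one target video in four).
original = binomtest(64, 200, p=0.25, alternative="greater")   # 32.0%
combined = binomtest(77, 252, p=0.25, alternative="greater")   # 30.6%
print(f"original p = {original.pvalue:.3f}, combined p = {combined.pvalue:.3f}")
# With these made-up numbers both stay below 0.05, illustrating Chris's
# point that adding the omitted data weakened but didn't erase the result.
```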
