Looking over the recent dream-ESP meta-analysis, I noticed an interesting experiment by Watt in which a decision had to be made about whether to include or exclude some additional results. Under the conditions of the study, participants were given feedback after each trial on how they were doing, and some of them dropped out after receiving that feedback. The concern was that instead of dropping out at random (in which case the dropout group's results would not be biased with respect to the main group), subjects were dropping out because their results were either too good (fear of psi) or too poor (discouragement by early failure). When Watt examined the dropout group's results, they turned out to be substantially and significantly worse than those of the participants who continued, creating a selection bias that would artificially inflate the significance and effect size in the remainder.
https://journals.ub.uni-heidelberg.de/in.../34888/pdf
http://www.research.ed.ac.uk/portal/file...ersion.pdf
https://koestlerunit.files.wordpress.com...jpnote.pdf
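To make the selection-bias mechanism concrete, here is a minimal simulation sketch (not anything from Watt's paper; all the numbers and the dropout rule are hypothetical, chosen only for illustration). It shows that even with zero true effect, excluding participants who score poorly leaves a remainder whose mean sits above chance:

```python
# Hypothetical illustration of dropout-driven selection bias:
# if low scorers preferentially quit, the mean of the remaining
# ("continuer") group is inflated relative to the true population mean.
import random

random.seed(1)

N = 1000          # hypothetical number of participants
TRIALS = 20       # trials per participant
P_HIT = 0.25      # true chance hit rate (i.e., no psi effect at all)

def run_participant():
    """Return one participant's hit rate across all trials."""
    return sum(random.random() < P_HIT for _ in range(TRIALS)) / TRIALS

scores = [run_participant() for _ in range(N)]

# Hypothetical dropout rule: participants discouraged by poor feedback
# quit; model this crudely as dropping anyone scoring below chance.
continuers = [s for s in scores if s >= P_HIT]
dropouts = [s for s in scores if s < P_HIT]

print(f"mean of everyone:   {sum(scores) / len(scores):.3f}")
print(f"mean of continuers: {sum(continuers) / len(continuers):.3f}")
print(f"mean of dropouts:   {sum(dropouts) / len(dropouts):.3f}")
# The continuers' mean exceeds chance despite there being no true
# effect, mimicking an inflated effect size when dropouts are excluded.
```

The same sketch run with the dropout rule reversed (high scorers quitting, as the "fear of psi" idea would predict) would deflate the continuers' mean instead, which is why the direction of the dropout bias matters for the inclusion decision.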
Lance et al. elected not to include those results, of course. How do you think this should have been handled? And what if it had gone the other way, confirming the "fear of psi" idea? That is, suppose the original study had shown no significant effect, but the dropouts had scored so well that adding their data back in produced a substantial and significant effect.
Linda