The recent discussion over whether Bem's "Feeling the Future" deserves retraction, plus claims from parapsychologists about the quality of their work in comparison to psychology, seems to set the bar too low for parapsychology. There is no question that the practices psychologists engage in (testing multiple hypotheses and selectively reporting the ones that "work", HARKing (hypothesizing after results are known), leaving data in the file drawer, optional starting and stopping, selective reporting, flexibility in outcomes, etc.) result in a plethora of false results, and seemingly in entire false fields (priming research is now under substantial suspicion). Since parapsychologists have been found to engage in these practices as well, their findings are under the same suspicion.
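To make one of those practices concrete, here is a minimal Monte Carlo sketch of optional stopping. All parameters (batch size, number of looks, the simple known-variance z-test) are hypothetical choices for illustration, not taken from any cited study; the point is only that peeking at the data after every batch and stopping at p < .05 produces far more than the nominal 5% of "positive" results even when the null hypothesis is true.

```python
# Hypothetical simulation: data are pure noise (null is true), yet
# testing after each batch and stopping at the first p < .05 inflates
# the false-positive rate well above the nominal alpha.
import math
import random

def z_test_p(sample_mean, n, sigma=1.0):
    """Two-sided p-value for H0: mu = 0 with known sigma (simple z-test)."""
    z = abs(sample_mean) * math.sqrt(n) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def run_experiment(rng, batch=10, max_batches=10, alpha=0.05, peek=True):
    """Return True if the experiment ends up 'significant'."""
    data = []
    for _ in range(max_batches):
        data.extend(rng.gauss(0, 1) for _ in range(batch))
        if peek and z_test_p(sum(data) / len(data), len(data)) < alpha:
            return True  # stop early and report a positive result
    return z_test_p(sum(data) / len(data), len(data)) < alpha

rng = random.Random(42)
trials = 2000
peeking = sum(run_experiment(rng, peek=True) for _ in range(trials)) / trials
fixed_n = sum(run_experiment(rng, peek=False) for _ in range(trials)) / trials
print(f"false-positive rate with optional stopping: {peeking:.3f}")
print(f"false-positive rate with fixed sample size: {fixed_n:.3f}")
```

With ten "looks" at the data, the stopping rule roughly triples or quadruples the false-positive rate relative to the fixed-sample design, which is the mechanism behind the concern about flexible stopping rules.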
I don't think that merely finding that parapsychology has been slightly better than psychology in this regard is good enough (if that is indeed the case; I'm curious whether Bem's shenanigans, such as passing exploratory research off as confirmatory research and failing to mention a file drawer, would have been seen as unacceptable by former standards). Even the corrections currently being made in psychology are insufficient to actually address the bulk of the problems. For example, the request for multi-study articles (as a way to ensure an effect is valid) may have the effect of increasing the production of false results (https://replicationindex.wordpress.com/2...gic-index/).
If parapsychology wants to be taken seriously, I think they need to aim a lot higher than setting themselves up as "better than psychology". Medicine is already way ahead of them in this regard, and they would do well to attempt to bring themselves up to the standard of good research practices.
Examples:
Pre-registration of all studies and a refusal from all major journals to publish unregistered studies. (http://www.nejm.org/doi/full/10.1056/NEJMe048225)
The use of research methods at low risk of bias.
(http://cobe.paginas.ufsc.br/files/2014/1...e.RCT_.pdf)
Addressing the level of quality of the studies performed (e.g., for parapsychology, performing direct comparisons rather than indirect comparisons based on "chance").
(http://ktdrr.org/products/update/v1n5/di...tev1n5.pdf)
I've watched Kennedy (who is also a medical doctor) make these kinds of suggestions, but there doesn't seem to be much momentum for this yet. The improvements made so far have been trivial with respect to reducing the production of false results. For example, journals' willingness to publish negative studies doesn't help much if researchers are using methods that grossly inflate their ability to produce "positive" studies.
Linda