Courtesy of the SPR Facebook page - UKrant, a news website for the University of Groningen in the Netherlands, has a report on an interesting online experiment run by Jacob Jolij of that university:
https://www.ukrant.nl/parapsychological-...t/?lang=en
Participants were presented with ten sets of random numbers and had to say whether they saw anything meaningful in them - for example, a date, a phone number or some pattern in the digits. In fact, half the sets were pseudo-random (generated by a computer program) and half were truly random (generated in hardware by quantum tunnelling).
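For a sense of what the two conditions amount to, here is a minimal Python sketch of the pseudo-random half. The ten-digit set length and the function name are my own assumptions for illustration, not details from the article, and the truly random condition can't be reproduced in software at all - it requires the hardware device:

```python
import random

def pseudo_random_digits(n=10, seed=None):
    """Return a string of n decimal digits from a software PRNG
    (Python's Mersenne Twister) - i.e. the 'pseudo-random' condition."""
    rng = random.Random(seed)
    return "".join(str(rng.randrange(10)) for _ in range(n))

# The 'truly random' condition would instead draw each digit from a
# hardware quantum RNG. There is no software equivalent, though
# os.urandom() (the OS entropy pool) is a common stand-in for
# physical entropy sources.
print(pseudo_random_digits())
```

The point of the contrast is that a PRNG output is fully determined by its seed, whereas the quantum source is (on the standard physical account) irreducibly random - which is presumably what the experiment is probing.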
With nearly 300 participants, Jolij found that they saw meaning in the truly random numbers more often than in the pseudo-random ones. The associated p value was 0.0013, so it wasn't a marginally significant effect.
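The article doesn't say which test produced the p = 0.0013, so purely as an illustration of the kind of calculation involved, here is an exact two-sided sign test in Python (stdlib only). The participant counts in the example are made up, not taken from the study:

```python
import math

def sign_test_p(successes, n):
    """Two-sided exact binomial sign test against chance (p = 0.5).
    Assumes successes >= n/2; the upper tail is doubled and capped at 1."""
    tail = sum(math.comb(n, k) for k in range(successes, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical illustration: of 290 participants, suppose 175 flagged
# more 'meaningful' sets among the true-random half than the pseudo half.
p = sign_test_p(175, 290)
print(p)  # well below 0.05
```

With a split that lopsided the p value comes out very small, which is the sense in which 0.0013 is far from marginal for a sample of this size.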
(Then there's a particularly wrong-headed statistical commentary, saying that "if you have enough people participating in a certain experiment you’ll always end up with a significant p value," but that Jolij had then used Bayesian statistics, and that had shown "there is a difference"! Naturally, the Bayesian analysis produced two different answers, depending on assumptions ...)
The article says the experiment is still running, but when I tried it I had trouble going from page to page of random numbers:
https://rug.eu.qualtrics.com/jfe/form/SV...AG6hdImsbb