2012 double-slit study by Radin et al.


(2019-02-08, 11:16 PM)Chris Wrote: In fact it seems the author found significant results in the predicted direction in only one of the two datasets. In the other, two statistical tests were applied. One of them showed significant deviations from expectation, but in the direction opposite to that predicted.

I'm not quite sure what the significance is of the direction (positive versus negative) of the deviations found, given there was apparently a coding error, which may itself affect what the expected results should be. I've only had a cursory look at the paper and can't say much more.

Quote: In 2013, the feedback was inversely proportional to a sliding 3-second span average of the fringe visibility: the higher the line, or the higher the pitch of the tone, the lower was the fringe visibility, the closer was the system to “particle-like” behaviour.

In 2014, due to a coding error, the feedback was inverted: the feedback now increased when the fringe visibility increased. The participant’s task was still to increase the feedback, but this time the higher the line, or the higher the pitch of the tone, the higher was the fringe visibility, the closer was the system to “wave-like” behaviour.
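To make the two feedback mappings concrete, here is a minimal Python sketch (hypothetical names; assuming a sliding 3-second average of fringe visibility, as the quote describes):

Code:
import numpy as np

def sliding_visibility(samples, window_len):
    """Sliding average of fringe visibility over the last window_len
    samples (e.g. 3 seconds' worth at the sampling rate)."""
    return np.convolve(samples, np.ones(window_len) / window_len, mode="valid")

def feedback_2013(avg_visibility):
    # Intended design: feedback inversely related to visibility, so raising
    # the line/tone means pushing toward "particle-like" behaviour.
    return 1.0 - avg_visibility

def feedback_2014(avg_visibility):
    # Coding error: feedback rises with visibility, so the same task
    # ("increase the feedback") now pushes toward "wave-like" behaviour.
    return avg_visibility

The point being that the participant's task was identical in both years; only the mapping from visibility to feedback flipped.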
(2019-02-09, 07:17 AM)Typoz Wrote: I'm not quite sure what the significance is of the direction (positive versus negative) of the deviations found, given there was apparently a coding error, which may itself affect what the expected results should be. I've only had a cursory look at the paper and can't say much more.

Thanks. I only looked at the conclusion section last night, and had forgotten about that coding error (referred to in my comment on 20 October 2017 above).

I think that means that in the second set of experiments the results were in line with what the feedback was encouraging. But the snag is that on the quantum-mechanics-motivated interpretation, the effect of "psi observation" should always be to weaken the fringe pattern. It should be a one-way effect that doesn't reverse if the sense of the feedback is reversed.
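To spell out why that matters, here is a purely illustrative sketch of the predicted sign of the visibility deviation under the two readings (the labels are mine, not the paper's):

Code:
def predicted_deviation(hypothesis, feedback_sign):
    # feedback_sign: +1 if feedback rewards higher visibility (the 2014
    # coding error), -1 if it rewards lower visibility (the 2013 design).
    if hypothesis == "qm_observation":
        # "Psi observation" extracts which-path information, so it should
        # always weaken the fringe pattern, whatever the feedback does.
        return -1
    if hypothesis == "goal_oriented":
        # A goal-oriented effect tracks whatever the feedback rewards.
        return feedback_sign
    raise ValueError(hypothesis)

for fb_sign in (-1, +1):
    print(fb_sign,
          predicted_deviation("qm_observation", fb_sign),
          predicted_deviation("goal_oriented", fb_sign))

On the quantum-mechanics reading the sign never flips; results that track the feedback look more like a goal-oriented (or artefactual) effect.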
(2017-10-19, 07:27 PM)Chris Wrote: If these experiments were showing a consistent effect it might be worth spending the time to work out all the details, but a lot of them don't show significant results, and for those that do, the direction of the effect isn't always the same. The experiment with the largest effect size (a whopping 0.90) shows the interference pattern strengthening when the subjects direct their attention towards it - contrary to the hypothesis and to most of the other significant results.

Chris, I'm wondering how your earlier comments above affect your view of the reliability/validity of the meta-analysis to which Dean refers in the video you linked to here, at around the 22:22 mark:

(2019-01-12, 08:38 AM)Chris Wrote: https://youtu.be/nRSBaq3vAeY
(2019-02-11, 07:32 AM)Laird Wrote: Chris, I'm wondering how your earlier comments above affect your view of the reliability/validity of the meta-analysis to which Dean refers in the video you linked to here, at around the 22:22 mark:

I think it's probably dangerous to apply a meta-analysis to a set of experiments whose results show so many unexplained inconsistencies and contradictions.
(2019-02-11, 08:25 AM)Chris Wrote: I think it's probably dangerous to apply a meta-analysis to a set of experiments whose results show so many unexplained inconsistencies and contradictions.

Thanks for sharing your view. I still haven't looked at the papers in question so don't have a firm view of my own.
(2019-02-11, 08:25 AM)Chris Wrote: I think it's probably dangerous to apply a meta-analysis to a set of experiments whose results show so many unexplained inconsistencies and contradictions.

A meta-analysis is only as good as the underlying data being combined. In other words, lead can't be turned into gold.
(2019-02-11, 12:47 PM)Steve001 Wrote: A meta-analysis is only as good as the underlying data being combined. In other words, lead can't be turned into gold.

Indeed. And if there's a mechanism at work (either anomalous or not) that hasn't been properly characterised - in terms of the variables that influence it, and how exactly it can be expected to manifest itself - then the results of a meta-analysis may be misleading.
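One standard way to make that worry quantitative before pooling is a heterogeneity check. A minimal sketch with made-up effect sizes and standard errors (not the actual double-slit data), using inverse-variance weighting and Cochran's Q:

Code:
import numpy as np
from scipy import stats

# Hypothetical per-study effects with mixed signs -- illustration only.
effects = np.array([0.25, -0.40, 0.10, 0.90, -0.15])
se = np.array([0.12, 0.18, 0.10, 0.35, 0.14])

w = 1.0 / se**2                       # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

# Cochran's Q: values large relative to k-1 degrees of freedom flag
# heterogeneity that a single pooled estimate papers over.
Q = np.sum(w * (effects - pooled) ** 2)
df = len(effects) - 1
p_het = stats.chi2.sf(Q, df)
I2 = max(0.0, (Q - df) / Q) * 100     # % of variance due to heterogeneity

print(f"pooled = {pooled:.3f} +/- {pooled_se:.3f}")
print(f"Q = {Q:.2f} (df = {df}), p = {p_het:.4f}, I^2 = {I2:.0f}%")

A pooled estimate with a high I-squared can look impressive while hiding effects that point in opposite directions.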
(2017-09-21, 10:51 PM)Chris Wrote: A few years later Jeffers (who had been responsible for the York University experiments reported in the joint paper with Ibison) contributed a chapter to "Psi Wars", edited by James E. Alcock, Jean Burns and Anthony Freeman (2004). The chapter is much more sceptical in tone than the joint paper. Most of it can be read in a Google Preview here:
https://books.google.com/books?id=JyfbUv...&lpg=PA135

The part relating to the double-slit experiment (pp. 146, 147) is:

[Image: Jeffers_146.jpg]
[Image: Jeffers_147.jpg]
(The paper by Mathews - actually Matthews - is a general discussion of the use of p values, with no specific reference to this experiment:
https://www.scientificexploration.org/do...tthews.pdf)

James Alcock also refers to this experiment in his own chapter of the same book:
https://books.google.com/books?id=JyfbUv...9&lpg=PA29

The relevant section (pp. 36, 37) is:

[Image: Alcock_36.jpg]
[Image: Alcock_37a.jpg]
[Image: Alcock_37b.jpg]

It's interesting to hear Brenda Dunne, who was the lab manager at PEAR, and Jeffrey Mishlove comment on Jeffers's work in this interview (at about 22 minutes). Despite what Alcock says about Jeffers's receptivity to psi, Mishlove had heard from one of Jeffers's participants that she had been given the impression Jeffers didn't expect the experiment to work, and Dunne formed a similar impression when Jeffers explained to her how he introduced participants to the experiment.

Dunne is in no doubt that the "experimenter effect" is just a question of normal psychology, in terms of participants being given a friendly and positive reception by the experimenter.

Courtesy of the SPR Facebook page - here is a report of an attempted replication of Radin's double-slit studies. I don't find the description of the work at all clear, but if I understand correctly, Radin was commissioned to perform the experiments on the condition that the data be provided to workers at the Phenoscience Laboratories in Berlin - Jan Walleczek and Nikolaus von Stillfried - for blind analysis using pre-specified techniques. (The paper was written by the German workers, and Radin is not a co-author, so presumably his own analysis may follow.) It was published in Frontiers in Psychology:
https://www.frontiersin.org/articles/10....01891/full

The protocol employed the same kind of sessions Radin had used previously, in which an observer was present and there were randomly selected periods of attention and relaxation. But there were also "sham" experiments in which no observer was present, for which the data were analysed in the same way.
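If I've understood the design, the key control is that one identical, pre-specified pipeline runs over both the observer sessions and the sham sessions. A toy sketch of that idea (invented names and stand-in data, not the authors' actual analysis code):

Code:
import numpy as np

rng = np.random.default_rng(0)

def make_schedule(n_epochs):
    """Randomly ordered attention/relax epochs, half of each."""
    labels = np.array(["attention", "relax"] * (n_epochs // 2))
    rng.shuffle(labels)
    return labels

def epoch_z(visibility, labels):
    """Z-like contrast between attention and relax epochs (illustrative,
    not the pre-registered statistic used in the paper)."""
    a = visibility[labels == "attention"]
    r = visibility[labels == "relax"]
    se = np.sqrt(a.var(ddof=1) / len(a) + r.var(ddof=1) / len(r))
    return (a.mean() - r.mean()) / se

# The same pipeline runs on observer and sham data, so any artefact in the
# apparatus or the analysis should show up in both conditions.
labels = make_schedule(40)
sham_visibility = rng.normal(0.5, 0.01, size=40)   # stand-in data
print(epoch_z(sham_visibility, labels))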

The results are rather perplexing. The sessions in which an observer was present produced no statistically significant results. But in the sham experiments, one comparison did produce a statistically significant result (Z=2.02). Unless this was just a chance finding, the suggestion seems to be that some kind of artefact, originating either in the experimental apparatus or in the data analysis, may be producing spurious positive results.
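For context, Z=2.02 is only just past the conventional two-tailed threshold, and more than one comparison was apparently run on the sham data, so a single hit of that size is not hard to get by chance. A quick check (the number of comparisons here is my assumption, for illustration):

Code:
from scipy import stats

z = 2.02
p_two_tailed = 2 * stats.norm.sf(z)       # ~0.043
print(f"p = {p_two_tailed:.3f}")

# With m independent comparisons on the sham sessions (m = 4 is an assumed
# value for illustration), the chance of at least one |Z| >= 2.02 grows fast:
m = 4
p_any = 1 - (1 - p_two_tailed) ** m       # ~0.16
print(f"P(at least one hit in {m} tests) = {p_any:.2f}")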
