6.37 sigma replication of Dean Radin's double slit consciousness experiments


(2018-03-01, 01:54 PM)Max_B Wrote: Briefly scanned the paper; no mention of sound vibration, and no measurement of sound vibration either. Not quite sure what the feedback is, but these quotes taken from the paper point at a potential problem which has to be ruled out first...

"They are then asked to put on the noise-canceling headphones"

"An automatic timer triggers a control session that starts 10 minutes later, running on the exact same computer code but with no person present in the experimental room. Before the control session start the experimenter ensures that the experiment al room lights are off, and places the headphones on the chair."

"control and the participant data cannot be equally classified because of the participant’s bodily presence in the experimental room"

"Concerning the physical mechanisms that could lead to artifacts in the participant data, heat is a monitored quantity, and no sensor resulted in both a globally significant differential score and a significant correlation to the variables of interest. Also, in experiment 0 a lamp producing more heat than a human body replaced the participant, demonstrating that a temperature increase in the experimental room cannot account for the measured effects. Even in the case of minor leakages, the oscillatory nature of vibration is more likely to introduce noise into the measurements than a direction-consistent variation that could mimic a signal"

I can believe there are various differences between the control sessions and the sessions with participants. The audio feedback will be one, because there will surely be more noise from the headphones sitting on the chair than from the headphones on someone's head. Temperature will be another, and perhaps humidity and the composition of the air in the room.

But I think the problem with looking for an explanation in terms of environmental factors is that, in a sense, each session is acting as its own control. What's being measured is the difference between alternative periods of intention and relaxation. So factors that are the same for both periods aren't going to drive that difference in a particular direction. 

The feedback does differ between the intention and relaxation periods. But the visual feedback differs in the same way in the control sessions, and the audio feedback will be louder in the control sessions. So - as the control sessions don't show the same departures from chance as the participant sessions - it's difficult to believe the feedback is responsible.

One possibility might be that when there is a participant in the room, environmental factors might increase the general variability of the signal from second to second. But the problem there is that the nonparametric "bootstrap" test that is used to characterise the response would appear to be insensitive to such an increase in variability, because the average difference between intention and relaxation periods is divided by a measure of the overall variability.
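
To make that concrete, here is a minimal sketch in Python of a label-shuffling test of the kind described (a close relative of the bootstrap; the data, block lengths and code are invented for illustration, not taken from the preprint):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-second measurements for one session (placeholder
# noise standing in for the real interference-fringe metric).
signal = rng.normal(0.0, 1.0, size=600)

# Assumed layout: alternating 30-second intention/relaxation blocks.
is_intention = np.tile(np.repeat([True, False], 30), 10)

def statistic(x, labels):
    # Mean intention-minus-relaxation difference, divided by the
    # overall variability, so uniformly noisier data inflates the
    # numerator and the denominator alike.
    return (x[labels].mean() - x[~labels].mean()) / x.std(ddof=1)

observed = statistic(signal, is_intention)

# Null distribution: shuffle the condition labels within the session.
null = np.array([statistic(signal, rng.permutation(is_intention))
                 for _ in range(10_000)])

p = np.mean(np.abs(null) >= abs(observed))  # two-sided p-value
print(f"standardised difference = {observed:.3f}, p = {p:.3f}")
```

Because the labels are shuffled within the session itself, this is also the sense in which each session acts as its own control: anything common to both periods cancels out of the difference.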

Then there is that correlation between the two measured variables, which becomes an anti-correlation when the sense of the feedback is reversed. The feedback variable was found not to vary significantly from chance between intention and relaxation periods (at least in the original set of experiments). So it's difficult to imagine an experimental artefact that could reverse the sign of the correlation when the sense of the feedback was changed.

At the moment, the only way I can imagine for the results to be an artefact is if the effect - caused by something some participants are doing during intention periods but not relaxation periods - is a large one, limited to a relatively small number of sessions. That might mean we were really dealing with a smaller number of events than the number of participants would indicate, so that the differences between average intention and relaxation measurements, and the correlations between the measured variables, might be less statistically significant than they appear.
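
A toy simulation of that worry (all numbers invented): if a handful of sessions carry a large shift, a naive test that treats every session as an equally informative, independent observation can still look significant, even though the evidence really rests on a few events.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-session intention-minus-relaxation differences:
# pure noise, except for 10 sessions with a large artefactual shift.
diffs = rng.normal(0.0, 1.0, size=100)
diffs[:10] += 5.0  # illustrative magnitude only

t, p = stats.ttest_1samp(diffs, 0.0)
print(f"all sessions:    t = {t:.2f}, p = {p:.4g}")  # typically significant

# But the apparent effect rests on ~10 events; remove them and
# nothing remains.
t2, p2 = stats.ttest_1samp(diffs[10:], 0.0)
print(f"without the ten: t = {t2:.2f}, p = {p2:.4g}")
```

In that situation the effective number of independent events is closer to ten than to a hundred, so the headline significance overstates the support.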
(2017-09-05, 01:47 PM)Max_B Wrote: This paper needs to show that they have at least excluded vibration (not exhaustive) as a cause of changes to fringe measurements, before suggesting they have found a new force in nature. 

I think this is the point that a lot of people are trying to make. It isn't a "new force" they've found. And it's a force which has been found relevant in all sorts of QM experiments, including the quantum eraser, for decades and decades. Unless I'm misunderstanding you, in which case I apologize.
(2017-09-09, 02:07 AM)Kamarling Wrote: I have no formal STEM education either so I make no pretense of knowing how wave collapse is brought about but, from my limited understanding, it seems that consciousness is always required to determine whether the wave has collapsed or not. Isn't that the point of the delayed choice eraser experiment?

[See below for video link]

Tom Campbell's belief/assertion (which is not at all without foundation) is that it is precisely the AVAILABILITY of 100 percent accurate which-way information that collapses the wave. This may come in the form of direct observation, the recording of the information, etc. What the eraser has demonstrated is that you can record the information and then delete it before anybody has a chance to view it. And when that is done, the wave function is maintained, DESPITE all the information-gathering systems still being turned on. It's the availability of accurate information to a conscious participant that matters. I tend to subscribe to this view as well, but I don't know much about these things so...
I've just seen that there was a further revision to the preprint on 9 March. The fourth (and final?) version is here:
https://osf.io/zsgwp/

The effect of the revisions is to avoid claiming confirmation of the existence of the phenomenon, but just to say that the findings warrant further investigation. In the abstract, an overall figure of Z=4.73 - obtained by applying a post hoc hypothesis to both the exploratory and the formal trials - has been removed, and the final sentence, which previously ran

"These results provide partial support for the previously claimed existence of anomalous interactions between conscious agents and a physical system."

has been replaced by 

"While the pre-registered analysis did not support the existence of the investigated phenomenon, the post hoc findings warrant further investigation to formally test the bi-directional hypothesis."
Sorry for my ignorance, I have no real scientific knowledge but I am interested in this replication. What's the conclusion, in simple words, of the last paper published on March 9?
(2019-01-07, 12:23 PM)Krm Wrote: Sorry for my ignorance, I have no real scientific knowledge but I am interested in this replication. What's the conclusion, in simple words, of the last paper published on March 9?

It means that when he tried to suggest in advance what would happen, and then tested his idea experimentally, the results didn't bear it out. But on the basis of the experimental results he is now suggesting a different idea, which he thinks would be worth testing by further experiments.

(Strictly speaking this isn't a paper because it hasn't been peer-reviewed or published in a journal. It's a preprint that has been "self-published" online.)
(2019-01-07, 12:44 PM)Chris Wrote: It means that when he tried to suggest in advance what would happen, and then tested his idea experimentally, the results didn't bear it out. But on the basis of the experimental results he is now suggesting a different idea, which he thinks would be worth testing by further experiments.

(Strictly speaking this isn't a paper because it hasn't been peer-reviewed or published in a journal. It's a preprint that has been "self-published" online.)

I just don't understand how, the first time he replicated it, the sigma looked very high, and then suddenly the results seem to give no evidence for the original Dean Radin study.

Based on the replication alone, did nothing weird happen at all, or was something weird noted anyway?

Thanks again for your answer
(2019-01-07, 01:28 PM)Krm Wrote: I just don't understand how, the first time he replicated it, the sigma looked very high, and then suddenly the results seem to give no evidence for the original Dean Radin study.

Based on the replication alone, did nothing weird happen at all, or was something weird noted anyway?

The problem was that in the original version the statistical tests were formulated after the experimental results were known, and actually included an element of optimisation, which artificially produced the appearance that they were highly significant.
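
A toy demonstration of how that kind of after-the-fact optimisation inflates apparent significance (pure noise and an invented menu of analysis choices; this is not the preprint's actual procedure):

```python
import numpy as np

rng = np.random.default_rng(2)

# A pure-noise "experiment": 50 runs of 200 samples, no real effect.
data = rng.normal(0.0, 1.0, size=(50, 200))

# Try twenty different analysis windows after seeing the data and
# keep whichever gives the largest z-score.
best_z = 0.0
for start in range(0, 200, 10):
    m = data[:, start:start + 10].mean(axis=1)       # per-run window mean
    z = m.mean() / (m.std(ddof=1) / np.sqrt(len(m)))
    best_z = max(best_z, abs(z))

print(f"best post hoc |z| on pure noise: {best_z:.2f}")
# With twenty tries, an |z| above 2 is more likely than not, even
# though there is nothing to find.
```

Pre-registering a single analysis before the data are seen is what rules this kind of selection out, which is why the pre-registered result carries more weight than the post hoc one.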
