Psience Quest

6.37 sigma replication of Dean Radin's double slit consciousness experiments
(2017-11-02, 08:53 AM)Chris Wrote: No, I haven't contacted Radin. I just wondered whether anyone here had any thoughts on it.

I think the only pre-Radin double-slit psi experiments were those published by Ibison and Jeffers in 1998:
http://psiencequest.net/forums/thread-27...ml#pid6581

They just looked at the heights of the central peak and the adjacent troughs of the interference pattern. They gave very little discussion of how they expected psychical observation to affect those heights, though evidently they expected the fringe pattern to weaken, so that the peak height would decrease and the trough heights would increase. But it seems to me that if diffraction as well as interference were affected, that might tend to increase the peak height.
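For reference, the textbook two-slit pattern consists of interference fringes modulated by a single-slit diffraction envelope, so the two effects contribute separately to the peak and trough heights. A minimal Python sketch of the standard formula (the parameter values are my own illustrative choices, not Ibison and Jeffers'):

[code]
import numpy as np

# Textbook two-slit intensity: cos^2 interference fringes (slit
# separation d) under a sinc^2 single-slit diffraction envelope
# (slit width a). All parameter values are illustrative only.
wavelength = 633e-9          # laser wavelength, m
d, a = 125e-6, 10e-6         # slit separation and slit width, m

theta = np.linspace(-0.02, 0.02, 2001)          # viewing angle, rad
beta = np.pi * d * np.sin(theta) / wavelength   # interference phase
alpha = np.pi * a * np.sin(theta) / wavelength  # diffraction phase
# np.sinc(x) is sin(pi*x)/(pi*x), so sinc(alpha/pi) = sin(alpha)/alpha
intensity = np.cos(beta) ** 2 * np.sinc(alpha / np.pi) ** 2

# Weakening the fringe contrast alone lowers the central peak and
# raises the adjacent troughs, whereas a change to the diffraction
# envelope moves the peak and trough heights together.
[/code]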

I’d reach out to him and ask. He referenced some older studies in a talk he gave.

Chris

A third version of the preprint is now available:
https://osf.io/zsgwp/

I have read only the abstract so far, which says that based on the previous studies (now treated as exploratory), a further study of 80 sessions was pre-registered, to be tested using a directional hypothesis. But the results were not statistically significant. However, a post hoc bi-directional hypothesis gave significant results for the 80 sessions (2.75 sigma) and for all 240* sessions to date (4.73 sigma). 

Clearly, if this is a real effect, there is a difficulty in predicting its direction. This was the case with previous versions of the preprint, where the effect appeared to operate in the direction opposite to the one expected on the basis of model calculations [though in some cases I'm not convinced the expected direction had been calculated correctly]. The abstract says that 240 control sessions, conducted without an observer present, continued to give non-significant results on both directional and bi-directional hypotheses. So despite the difficulty in predicting the direction, it's still possible there's a real effect there.

(* Edit: Apparently this figure actually relates to 180 sessions, because the first experiment of 60 sessions, which was used to optimise the variables V1 and V2, was excluded.)
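To make the directional/bi-directional distinction concrete, here is a minimal Python sketch (mine, not Guerrer's code) of how a 2.75 sigma result fares under each test; the opposite-to-predicted sign is an assumption, based on the direction difficulties described above:

[code]
from scipy.stats import norm

# Hypothetical illustration: suppose the measured effect is 2.75 sigma,
# but with the sign opposite to the pre-registered prediction, so the
# Z score in the predicted direction is -2.75.
z = -2.75

# Directional (one-tailed) test: only departures in the predicted
# direction count, so an effect of the "wrong" sign is non-significant.
p_directional = norm.sf(z)             # ~0.997

# Bi-directional (two-tailed) test: departures either way count.
p_bidirectional = 2 * norm.sf(abs(z))  # ~0.006

print(f"one-tailed p = {p_directional:.3f}, two-tailed p = {p_bidirectional:.4f}")
[/code]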

Chris

(2018-02-18, 09:54 AM)Chris Wrote: A third version of the preprint is now available:
https://osf.io/zsgwp/

I have read only the abstract so far, which says that based on the previous studies (now treated as exploratory), a further study of 80 sessions was pre-registered, to be tested using a directional hypothesis. But the results were not statistically significant. However, a post hoc bi-directional hypothesis gave significant results for the 80 sessions (2.75 sigma) and for all 240* sessions to date (4.73 sigma). 

Clearly, if this is a real effect, there is a difficulty in predicting its direction. This was the case with previous versions of the preprint, where the effect appeared to operate in the direction opposite to the one expected on the basis of model calculations [though in some cases I'm not convinced the expected direction had been calculated correctly]. The abstract says that 240 control sessions, conducted without an observer present, continued to give non-significant results on both directional and bi-directional hypotheses. So despite the difficulty in predicting the direction, it's still possible there's a real effect there.

(* Edit: Apparently this figure actually relates to 180 sessions, because the first experiment of 60 sessions, which was used to optimise the variables V1 and V2, was excluded.)


Having read the new version, I'm not really any clearer about what's going on. The four new experiments are based on the final two in the previous version, which were the ones that gave the strongest results. 

In these experiments, the feedback given to the participants would have encouraged them either to increase or decrease a variable based on the amplitudes of the first few terms in a Fourier series representation of the measured interference pattern (that is, the amplitudes of the wave-like components with the longest wavelengths, reflecting the large-scale shape of the interference pattern). But the response was analysed using two different variables, V1 and V2, which are defined in terms of the phases (i.e. the sideways shifts of the wave-like components) for two ranges of smaller wavelengths (intermediate between the lengthscale of the overall interference pattern and the wavelength of the interference fringes).
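As a rough Python illustration of the amplitude/phase distinction (my own sketch; the toy pattern and the Fourier-mode ranges are invented for the example and are not the preprint's definitions of the feedback variable or of V1 and V2):

[code]
import numpy as np

# Hypothetical measured interference pattern: intensity sampled at N
# pixel positions across the camera (invented data for illustration).
N = 1024
x = np.arange(N)
pattern = 1 + 0.5 * np.cos(2 * np.pi * 8 * x / N + 0.1)  # toy fringes

# Discrete Fourier transform of the pattern.
coeffs = np.fft.rfft(pattern)

# Amplitudes of the first few (longest-wavelength) components - the
# kind of quantity the feedback was based on.
feedback_amplitudes = np.abs(coeffs[1:5])

# Phases (sideways shifts) of components in an intermediate-wavelength
# band - the kind of quantity from which V1 and V2 were defined. The
# mode range 5:20 is an arbitrary stand-in, not the preprint's choice.
analysis_phases = np.angle(coeffs[5:20])

print(feedback_amplitudes)
print(analysis_phases)
[/code]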

As the feedback variable is different from the variables used to analyse the response, it's not necessarily surprising that the response could vary in direction. There might be different ways in which the pattern could change, with the signs of the phase shifts differing while the changes in the amplitudes of the leading terms were in the same sense. (Guerrer used a theoretical model as motivation, which predicted the sign of V1 - though not of V2 - for given feedback. But as the predicted sign in the final two exploratory experiments was the opposite of the one measured, the model doesn't seem to reflect what's happening.) Indeed, as the double-slit system is nearly left-right symmetrical, it's not hard to imagine the pattern changing in such a way that a reversal of the sign of the phase shifts (and thus of V1 and V2) leaves the sign of the amplitude changes (and thus of the feedback variable) unchanged.

But we're still left with a situation in which, for example, in experiment 5 (20 participants) both V1 and V2 show very significant decreases (Z=-3.15 and -2.87), whereas in experiment 9 (also 20 participants, with the same feedback), they both show significant increases (Z=2.23 and 3.00). Perhaps these significant results in opposite directions are produced by only a small number of participants, which might make them statistically reconcilable. Or perhaps it's some kind of experimental artefact occurring in only a small number of sessions, though it's not easy to imagine how that would work.

One other interesting finding is that pooling experiments 5, 7 and 9 (which share the same feedback), there is a very significant correlation between the observed changes in V1 and V2 (p=0.01). But for experiments 4, 6 and 8 (in which the feedback acts in the opposite sense), the changes in V1 and V2 are significantly negatively correlated (also p=0.01). Again, given the approximate left-right symmetry of the system, it's not too hard to imagine that the directions of steepest (1) increase and (2) decrease of the feedback variable could correspond to the changes in V1 and V2 having (1) the same and (2) opposite signs. But it's hard to see why this change in the correlation according to the sense of the feedback should occur if this is an experimental artefact rather than a psi effect.
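For anyone wanting to check that correlation against the downloadable data, the test is presumably of this general form (a Python sketch with invented stand-in numbers, not the actual OSF data):

[code]
import numpy as np
from scipy.stats import pearsonr

# Invented stand-ins for the per-session changes in V1 and V2
# (intention minus relaxation), pooled over the experiments that
# share the same sense of feedback.
rng = np.random.default_rng(0)
dV1 = rng.normal(size=60)
dV2 = 0.4 * dV1 + rng.normal(size=60)  # toy positive coupling

r, p = pearsonr(dV1, dV2)
print(f"r = {r:.2f}, p = {p:.3g}")
# On the reported pattern, a pool with the opposite feedback sense
# would instead show r < 0 with a similarly small p.
[/code]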

Perhaps more could be gleaned from a closer look at the experimental data. It's good that the data can be downloaded from the same OSF website where the preprint is hosted.

Chris

(2018-03-01, 01:54 PM)Max_B Wrote: Briefly scanned the paper - no mention of sound vibration, and no measurement of sound vibration either, etc. etc. Not quite sure what the feedback is, but these quotes taken from the paper point at a potential problem which has to be ruled out first...

"They are then asked to put on the noise-canceling headphones"

"An automatic timer triggers a control session that starts 10 minutes later, running on the exact same computer code but with no person present in the experimental room. Before the control session start the experimenter ensures that the experiment al room lights are off, and places the headphones on the chair."

"control and the participant data cannot be equally classified because of the participant’s bodily presence in the experimental room"

"Concerning the physical mechanisms that could lead to artifacts in the participant data, heat is a monitored quantity, and no sensor resulted in both a globally significant differential score and a significant correlation to the variables of interest. Also, in experiment 0 a lamp producing more heat than a human body replaced the participant, demonstrating that a temperature increase in the experimental room cannot account for the measured effects. Even in the case of minor leakages, the oscillatory nature of vibration is more likely to introduce noise into the measurements than a direction-consistent variation that could mimic a signal"

I can believe there are various differences between the control sessions and the sessions with participants. The audio feedback will be one, because there will surely be more noise from the headphones sitting on the chair than from the headphones on someone's head. Temperature will be another, and perhaps humidity and the composition of the air in the room.

But I think the problem with looking for an explanation in terms of environmental factors is that, in a sense, each session is acting as its own control. What's being measured is the difference between alternating periods of intention and relaxation. So factors that are the same for both periods aren't going to drive that difference in a particular direction.

The feedback does differ between the intention and relaxation periods. But the visual feedback differs in the same way in the control sessions, and the audio feedback will be louder in the control sessions. So - as the control sessions don't show the same departures from chance as the participant sessions - it's difficult to believe the feedback is responsible.

One possibility might be that when there is a participant in the room, environmental factors might increase the general variability of the signal from second to second. But the problem there is that the nonparametric "bootstrap" test that is used to characterise the response would appear to be insensitive to such an increase in variability, because the average difference between intention and relaxation periods is divided by a measure of the overall variability.
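To show why, here is a minimal Python sketch of the kind of normalised statistic I have in mind - my reconstruction of the idea, not the preprint's actual code:

[code]
import numpy as np

def session_z(intention, relaxation, n_boot=10000, seed=0):
    """Bootstrap Z-like score for one session: the mean
    intention-minus-relaxation difference, scaled by the spread of
    the same statistic under resampling of the pooled data."""
    rng = np.random.default_rng(seed)
    observed = intention.mean() - relaxation.mean()
    pooled = np.concatenate([intention, relaxation])
    n_int = intention.size
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        resample = rng.choice(pooled, size=pooled.size, replace=True)
        diffs[i] = resample[:n_int].mean() - resample[n_int:].mean()
    return observed / diffs.std()

# A uniform increase in second-to-second variability inflates both the
# raw intention-relaxation difference and the resampling spread, so it
# largely cancels in the ratio and cannot by itself push Z one way.
[/code]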

Then there is that correlation between the two measured variables, which becomes an anti-correlation when the sense of the feedback is reversed. The feedback variable was found not to vary significantly from chance between intention and relaxation periods (at least in the original set of experiments). So it's difficult to imagine an experimental artefact that could reverse the sign of the correlation when the sense of the feedback was changed.

At the moment, the only way I can imagine for the results to be an artefact is if the effect - caused by something some participants are doing during intention periods but not relaxation periods - is a large one, limited to a relatively small number of sessions. That might mean we were really dealing with a smaller number of events than the number of participants would indicate, so that the differences between average intention and relaxation measurements, and the correlations between the measured variables, might be less statistically significant than they appear.
(2017-09-05, 01:47 PM)Max_B Wrote: This paper needs to show that they have at least excluded vibration (not exhaustive) as a cause of changes to fringe measurements, before suggesting they have found a new force in nature.

I think this is the point that a lot of people are trying to make. It isn't a "new force" they've found - it's a force which has been found relevant in all sorts of QM experiments, including the quantum eraser, for decades and decades. Unless I'm misunderstanding you, in which case I apologize.
(2017-09-09, 02:07 AM)Kamarling Wrote: I have no formal STEM education either, so I make no pretense of knowing how wave collapse is brought about, but from my limited understanding it seems that consciousness is always required to determine whether the wave has collapsed or not. Isn't that the point of the delayed-choice eraser experiment?


Tom Campbell's belief/assertion (which is not at all without foundation) is that it is precisely the AVAILABILITY of 100 percent accurate which-way information that collapses the wave. This may come in the form of direct observation or the recording of the information, etc. What the eraser has demonstrated is that you can record the information and then delete it before anybody has a chance to view it. And when that is done, the wave function is maintained, DESPITE all the information-gathering systems still being turned on. It's the availability of accurate information to a conscious participant. I tend to subscribe to this view as well, but I don't know much about these things so....

Chris

I've just seen that there was a further revision to the preprint on 9 March. The fourth (and final?) version is here:
https://osf.io/zsgwp/

The effect of the revisions is to avoid claiming confirmation of the existence of the phenomenon, but just to say that the findings warrant further investigation. In the abstract, an overall figure of Z=4.73 - obtained by applying a post hoc hypothesis to both the exploratory and the formal trials - has been removed, and the final sentence, which previously ran

"These results provide partial support for the previously claimed existence of anomalous interactions between conscious agents and a physical system."

has been replaced by 

"While the pre-registered analysis did not support the existence of the investigated phenomenon, the post hoc findings warrant further investigation to formally test the bi-directional hypothesis."
Sorry for my ignorance - I have no real scientific knowledge, but I am interested in this replication. What's the conclusion, in simple words, of the last paper published on March 9?

Chris

(2019-01-07, 12:23 PM)Krm Wrote: Sorry for my ignorance - I have no real scientific knowledge, but I am interested in this replication. What's the conclusion, in simple words, of the last paper published on March 9?

It means that when he tried to suggest in advance what would happen, and then tested his idea experimentally, the results didn't bear it out. But on the basis of the experimental results he is now suggesting a different idea, which he thinks would be worth testing by further experiments.

(Strictly speaking this isn't a paper because it hasn't been peer-reviewed or published in a journal. It's a preprint that has been "self-published" online.)
(2019-01-07, 12:44 PM)Chris Wrote: It means that when he tried to suggest in advance what would happen, and then tested his idea experimentally, the results didn't bear it out. But on the basis of the experimental results he is now suggesting a different idea, which he thinks would be worth testing by further experiments.

(Strictly speaking this isn't a paper because it hasn't been peer-reviewed or published in a journal. It's a preprint that has been "self-published" online.)

I just don't understand how, the first time he replicated it, the sigma looked very high, and then suddenly the results seem to give no evidence for the original Dean Radin study.

Based on the replication alone, did nothing weird happen at all, or was something weird noted anyway?

Thanks again for your answer