New meta-analysis of dream-ESP studies

Courtesy of the SPR Facebook page and Carlos S. Alvarado's blog, here is a new meta-analysis of dream-ESP studies.

On the correspondence between dream content and target material under laboratory conditions: A meta-analysis of dream-ESP studies, 1966-2016
Lance Storm, Simon J. Sherwood, Christopher A. Roe, Patrizio E. Tressoldi, Adam J. Rock, Lorenzo Di Risio
International Journal of Dream Research, 10(2), 120-140 (2017)

The full paper can be downloaded here:
http://journals.ub.uni-heidelberg.de/ind.../34888/pdf

Here is the abstract:
"In order to further our understanding about the limits of human consciousness and the dream state, we report meta-analytic results on experimental dream-ESP studies for the period 1966 to 2016. Dream-ESP can be defined as a form of extra-sensory perception (ESP) in which a dreaming perceiver ostensibly gains information about a randomly selected target without using the normal sensory modalities or logical inference. Studies fell into two categories: the Maimonides Dream Lab (MDL) studies (n = 14), and independent (non-MDL) studies (n = 36). The MDL dataset yielded mean ES = .33 (SD = 0.37); the non-MDL studies yielded mean ES = .14 (SD = 0.27). The difference between the two mean values was not significant. A homogeneous dataset (N = 50) yielded a mean z of 0.75 (ES = .20, SD = 0.31), with corresponding significant Stouffer Z = 5.32, p = 5.19 × 10-8, suggesting that dream content can be used to identify target materials correctly and more often than would be expected by chance. No significant differences were found between: (a) three modes of ESP (telepathy, clairvoyance, precognition), (b) senders, (c) perceivers, or (d) REM/non-REM monitoring. The ES difference between dynamic targets (e.g., movie-film) and static (e.g., photographs) targets approached significance. We also found that significant improvements in the quality of the studies was not related to ES, but ES did decline over the 51-year period. Bayesian analysis of the same homogeneous dataset yielded results supporting the ‘frequentist’ finding that the null hypothesis should be rejected. We conclude that the dream-ESP paradigm in parapsychology is worthy of continued investigation, but we recommend design improvements."
(2017-11-26, 09:14 AM)Chris Wrote: Courtesy of the SPR Facebook page and Carlos S. Alvarado's blog, here is a new meta-analysis of dream-ESP studies.

Was going to post this myself.

The authors are careful not to overstate their conclusions, and they recognise that higher-quality studies are needed. But while they acknowledge that skeptics will take issue with how small the effect size is, they seem to wave that away by bringing up the aspirin study, without mentioning that the aspirin finding rested on a very large sample while the studies in this meta-analysis are, for the most part, quite small.

But taking a look at the biggest studies in the sample, I can't get over just how small an effect we're talking about! Let's look at some of the numbers (a quick binomial check follows the list):

Honorton et al.: n = 203, hits: 105, hit rate: 51.72% (chance = 50%)
Luke and Zychowicz: n = 268, hits: 69, hit rate: 25.75% (chance = 25%)
Luke et al.: n = 143, hits: 33, hit rate: 23.08% (chance = 25%)
Van de Castle: n = 150, hits: 95, hit rate: 63.33% (chance = 50%)
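To make the "right at chance" point concrete, here is a minimal sketch (my own check using exact binomial tests against the stated chance rates, not the paper's scoring method):

[code]
# Minimal sketch (my own check, not the paper's analysis): exact one-tailed binomial
# tests of the quoted hit counts against their chance rates (needs scipy >= 1.7).
from scipy.stats import binomtest

studies = [
    ("Honorton et al.",    203, 105, 0.50),
    ("Luke and Zychowicz", 268,  69, 0.25),
    ("Luke et al.",        143,  33, 0.25),
    ("Van de Castle",      150,  95, 0.50),
]

for name, n, hits, chance in studies:
    result = binomtest(hits, n, chance, alternative="greater")
    print(f"{name}: hit rate {hits / n:.1%} vs chance {chance:.0%}, "
          f"one-tailed p = {result.pvalue:.3f}")
[/code]

On that rough check, only the Van de Castle figures come out clearly above chance; the other three are nowhere near significant on their own.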


I'm not sure whether these studies are considered sufficiently powered or not, but with the exception of the Van de Castle study we are pretty well right at chance. There were a couple at n = 100 that were right at chance as well. The rest of the studies were for the most part much smaller, with Ns as low as 7 or 8 (some may have been excluded; I haven't mapped it out).

I don't have tremendous difficulty with the suggestion to do better, higher-quality studies, but when we're talking about effects this small, in a meta-analysis dominated by underpowered studies, I would think the authors should be even less enthusiastic about the psi hypothesis here.

Also quite interesting that they found the star subjects didn't do statistically better than the average-joe subjects.

I'll have to go over it more closely but these are some initial thoughts I had going through it.
(2017-11-26, 11:34 AM)Arouet Wrote: Was going to post this myself.

I haven't had a chance to read this yet, and it's likely to be a while before I have time to do so.

But looking at the table, I was actually surprised by how large some of the studies were. In a review published by two of the authors, Sherwood and Roe, in 2003, the largest non-Maimonides studies were two with n=100 (the others had n=50 or fewer). If I understand correctly, the main problem with the Maimonides studies was that the protocol was so labour-intensive (and obviously involved working at night). The Ganzfeld was a more convenient alternative. So I shall be interested to see whether those large studies came anywhere near to the painstaking Maimonides protocol. (The Honorton paper from 1972 was counted as a Maimonides study by Sherwood and Roe, but it was so much larger than the rest that I presume the protocol was different.)

Incidentally, in addition to the ones you quote, the table also lists a study by Watt (2014) with n=200 and a Z value of 2.2.
(2017-11-26, 12:46 PM)Chris Wrote: I haven't had a chance to read this yet, and it's likely to be a while before I have time to do so.

The Watt study was excluded because it used a different protocol, according to the authors, but they wrote that including it wouldn't change the results much.

Re: the larger studies, if I understand things correctly, at that effect size only the n = 268 study would be sufficiently powered, or close enough (I'm guessing, based on how Kennedy explained the power needed for the ganzfeld effect size, which was larger).

I understand there can be very good (i.e. financial) reasons for underpowered studies, but we still have to deal with the results we have. For the most part, from my lay view, it looks like the bigger the study, the more the effect all but disappears. Or maybe there is something I'm missing.
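Just to put rough numbers on the power question, here is a minimal back-of-envelope sketch (mine, with purely illustrative hit rates rather than figures from the paper) of how many trials a single study would need for 80% power in a one-sided test of a hit rate against chance; the answer is obviously very sensitive to the true hit rate you assume.

[code]
# Minimal sketch (my own back-of-envelope, not from the paper): approximate sample
# size for 80% power in a one-sided test of a hit rate above chance, using the
# normal approximation to the binomial. The hit rates below are illustrative only.
from math import sqrt, ceil
from scipy.stats import norm

def required_n(p0, p1, alpha=0.05, power=0.80):
    """Approximate trials needed to detect true hit rate p1 against chance p0."""
    z_a = norm.ppf(1 - alpha)
    z_b = norm.ppf(power)
    num = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

print(required_n(0.25, 0.28))   # 28% vs 25% chance -> roughly 1300 trials
print(required_n(0.50, 0.55))   # 55% vs 50% chance -> roughly 620 trials
[/code]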
(2017-11-26, 01:45 PM)Arouet Wrote: The Watt study was excluded because it used a different protocol, according to the authors, but they wrote that including it wouldn't change the results much.

No - those comments relate to Watt, Wiseman and Vuillaume (2015). They include the Watt study I referred to (2014), and indeed they regard it as one of the "short-list of impressive precognition studies".
(2017-11-26, 01:55 PM)Chris Wrote: No - those comments relate to Watt, Wiseman and Vuillaume (2015). They include the Watt study I referred to (2014), and indeed they regard it as one of the "short-list of impressive precognition studies".

Ahh, I didn't notice that they were talking about two different papers (and was led astray even more because they mentioned the Watt paper was not peer reviewed, but I see now they were just referring to a note in the paper).

I'll have to look at that paper, and it should be added to the list of bigger studies. But I think we have to look closely at these studies. This is an issue I have with the ganzfeld as well: the results tend to be all over the place. I'm not sure what to do with that, but it raises questions for me. Also the heterogeneity of the studies: how should this affect how we view the results?
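On the heterogeneity point, the usual way to quantify it is Cochran's Q and I²; here is a minimal sketch (with made-up effect sizes and standard errors, not values from this meta-analysis) just to show what those statistics measure:

[code]
# Minimal sketch (illustrative only, not data from the meta-analysis): Cochran's Q
# and I^2 as measures of between-study heterogeneity in effect sizes.
import numpy as np

def heterogeneity(es, se):
    """Return Cochran's Q and I^2 for effect sizes `es` with standard errors `se`."""
    es, se = np.asarray(es, float), np.asarray(se, float)
    w = 1.0 / se**2                          # inverse-variance weights
    pooled = np.sum(w * es) / np.sum(w)      # fixed-effect pooled estimate
    q = np.sum(w * (es - pooled) ** 2)       # Cochran's Q
    df = len(es) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([0.10, 0.35, -0.05, 0.20], [0.08, 0.12, 0.10, 0.09])
print(f"Q = {q:.2f}, I^2 = {i2:.0%}")   # I^2 near 0% = consistent; 75%+ = very heterogeneous
[/code]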
(2017-11-26, 02:17 PM)Arouet Wrote: Ahh, I didn't notice that they were talking about two different papers (and was led astray even more because they mentioned the Watt paper was not peer reviewed, but I see now they were just referring to a note in the paper).


What do you mean by 'the Ganzfeld results are all over the place'?
We should leave the ganzfeld for another thread, or else this one will be derailed.
I don't know whether anyone has actually tried to read this paper, but if anyone has ...

Can someone explain what the authors mean by:
(i) Same perceiver studies versus different perceiver studies,
(ii) Single perceiver studies versus multiple perceiver studies and
(iii) Single subject (i.e. 1 percipient) studies versus multiple perceiver studies?

Apparently these are meant to be three different distinctions. I understand what the third means, but not the first two. I keep hoping the authors will explain themselves, but I've just read the relevant part of the results section, and I'm still in the dark.
