(2019-01-25, 12:55 AM)Laird Wrote: And here's how he continued a couple of paragraphs down (emboldening mine):
You and malf have still not explained what it is about this experiment's methodology - or, in other words, its blinding procedure - that can account for the results in terms of selection bias. The experimenters are very clear that they were blind: that they pre-specified each event's start and end time, and the method of statistical analysis to be used, before looking at the data relating to it.
So, either you guys are maintaining that the experimenters are lying, and that they actually peeked at the data, or you are maintaining that this means of blinding is insufficient, or you are maintaining both. In the latter two cases, could either or both of you please finally explain what is insufficient about this blinding procedure?
I don’t know what’s going on, nobody does. We do not have the equivalent of the Hennacy-Powell video. I guess anything Radin is involved in can be judged against his history and we appear to be exploring that in another thread.
The blinding clearly wasn’t sufficient for the proposed hypothesis, such as it was. We know this from Bancel's work, which shows they achieved their desired result in the absence of data to support it. Now perhaps their blinding was undone by ‘experimenter psi’, or something else... which brings us back to Radin’s history.
I still don’t understand how events are chosen. Who does that?
(This post was last modified: 2019-01-25, 02:03 AM by malf.)
A simple "No" would have sufficed, malf. ;-)
(2019-01-25, 12:55 AM)Laird Wrote: So, either you guys are maintaining that the experimenters are lying, and that they actually peeked at the data, or you are maintaining that this means of blinding is insufficient, or you are maintaining both. In the latter two cases, could either or both of you please finally explain what is insufficient about this blinding procedure?
I share your frustration, to the extent that I feel there's a vacuum of critical thinking here, and that some kind of sensible sceptical criticism of the protocol should be provided. So my attempt to do that is as follows.
By the standards of the 1990s the design of the project was adequate to the extent that it acknowledged the importance of specifying the statistical tests in advance of looking at the data. (And those are tests of the null hypothesis, so their validity isn't dependent on any particular mechanism giving rise to the anomalous behaviour.)

The aspect of the design that was flawed - certainly by today's standards, but also in the light of the kind of precautions that used to be taken even as far back as the 1930s and 1940s - was its failure to incorporate any safeguards against questionable research practices, or a mechanism to ensure the statistical tests were unambiguously specified. That's demonstrated by the fact that Bancel found and excluded 13 events where either there was ambiguity in the test or the data had been looked at before the test was specified.

So although no one has ever suggested a conventional explanation consistent with the stated protocol, we have no independent assurance that the protocol was observed. That largely depends on trust, which is obviously a great pity. So if anyone were to try to replicate the project, a priority would be automatic procedures to ensure the data remained inaccessible until the statistical tests had been fixed and published. I don't imagine that would be hard to do in practice today.
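To make that concrete, here is a minimal sketch (in Python, with purely illustrative field names and event details - nothing here is part of the GCP's actual procedure) of one way such a safeguard could work: the full analysis specification is hashed, the digest is published before the event window opens, and the revealed specification is later verified against it.

```python
# A minimal sketch (not the GCP protocol) of committing to an analysis
# specification before the data become accessible: publish the hash in
# advance, reveal the spec afterwards, and let anyone verify the match.
import hashlib
import json

def commit_analysis_spec(spec: dict) -> str:
    """Serialize the pre-registered spec deterministically and return a
    SHA-256 digest that can be published before any data are seen."""
    canonical = json.dumps(spec, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def verify_analysis_spec(spec: dict, published_digest: str) -> bool:
    """After the event, check the revealed spec against the digest."""
    return commit_analysis_spec(spec) == published_digest

# Hypothetical registration -- the field names are illustrative only.
spec = {
    "event": "example event",
    "start_utc": "1999-12-31T23:00:00Z",
    "end_utc": "2000-01-01T01:00:00Z",
    "statistic": "squared Stouffer Z summed per second",
}
digest = commit_analysis_spec(spec)
assert verify_analysis_spec(spec, digest)
print(digest)
```

Publishing the digest in advance (through a timestamping service, or simply on a public page) would supply exactly the independent assurance that is missing: the data could sit in escrow, and any later change to the specification would change the hash.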
The following 3 users Like Guest's post: EthanT, malf, Laird
(2019-01-25, 01:58 AM)malf Wrote: I don’t know what’s going on, nobody does. We do not have the equivalent of the Hennacy-Powell video. I guess anything Radin is involved in can be judged against his history and we appear to be exploring that in another thread.
The blinding clearly wasn’t sufficient for the proposed hypothesis, such as it was. We know this from Bancel's work, which shows they achieved their desired result in the absence of data to support it. Now perhaps their blinding was undone by ‘experimenter psi’, or something else... which brings us back to Radin’s history.
I still don’t understand how events are chosen. Who does that?
They make a big point about making the data and its position (z-score) easily accessible - both in real time and historically. So there's no question that people are looking at the data (and are encouraged to do so). Since this seems to be at odds with how we are interpreting their stated practice, I suspect there is a discrepancy between what we picture (no data peeking) and what was actually done. I don't think Nelson actually says that the data weren't sometimes looked at informally, only that the outcome statistic and timestamps were specified prior to close examination, for those events which were analyzed. This contrasts with the early days, when Radin would try multiple different time periods, blocking intervals and outcome statistics in order to find one which was statistically significant* (as far as I can tell, Radin has never met a QRP he didn't like). It doesn't mean that some of the researchers weren't looking through the data displays prior to analysis, when wondering about which events might be of interest or trying out various other ideas.
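For concreteness, here is a rough sketch of the kind of pre-specified outcome statistic at issue: per-second z-scores from each device are combined with a Stouffer Z, squared, and summed over the event window, then referred to a chi-square distribution. The trial parameters (200-bit trials per second, so mean 100 and variance 50) follow the project's published description as I understand it, but the exact recipe below is an illustrative assumption, not the formal protocol.

```python
# Illustrative sketch of a "correlated mean shift"-style event statistic.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

n_eggs, n_seconds = 60, 3600          # hypothetical network and event window
trials = rng.binomial(n=200, p=0.5, size=(n_eggs, n_seconds))
z = (trials - 100.0) / np.sqrt(50.0)  # 200-bit trials: mean 100, variance 50

stouffer = z.sum(axis=0) / np.sqrt(n_eggs)  # one combined Z per second
netvar = (stouffer ** 2).sum()              # summed over the event window
p_value = chi2.sf(netvar, df=n_seconds)     # chi-square, one df per second
print(f"netvar = {netvar:.1f}, p = {p_value:.3f}")
```

The point is simply that once the window and the statistic are fixed, the test is well defined; the problems arise when either is chosen after looking at displays like the ones linked below.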
From the time the project started until the last few years, it was generally unrecognized within psychology just how readily apparently quite reasonable practices could create spurious effects. For example, Nelson describes (on the recipes page) that in 2002 they were thinking of changing the outcome variable from the correlated mean shift to the device variance. So for a period of time they performed both measures when possible, in order to learn more about the question. It's fairly clear that this sort of flexibility will have introduced bias into the event dataset. Had the device variance happened not to perform well (i.e. produce positive results), we would likely have heard no more about it. But it doesn't get mentioned as a source of flexibility, because it wouldn't have been recognized as such. As you say, who knows what else is in there?
http://noosphere.princeton.edu/data/eggsummary/
https://www.heartmath.org/gci/gcms/live-data/
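To illustrate how much that sort of flexibility can matter, here is a small simulation (illustrative numbers only): under a true null, reporting whichever of two outcome measures happens to look better nearly doubles the nominal 5% false-positive rate.

```python
# Simulate the bias from choosing, post hoc, the better of two outcome
# measures (a "mean shift"-style and a "variance"-style statistic) on
# pure-noise data. Each is a valid one-sided 5% test in isolation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_sim, n = 20_000, 500

hits_single, hits_best = 0, 0
for _ in range(n_sim):
    x = rng.standard_normal(n)                      # null data: no effect
    z_mean = x.mean() * np.sqrt(n)                  # mean-shift statistic
    z_var = (x.var(ddof=1) - 1.0) * np.sqrt(n / 2)  # variance statistic
    hits_single += norm.sf(z_mean) < 0.05
    hits_best += min(norm.sf(z_mean), norm.sf(z_var)) < 0.05

print(f"single pre-specified test:    {hits_single / n_sim:.3f}")  # ~0.05
print(f"best of two, chosen post hoc: {hits_best / n_sim:.3f}")    # ~0.10
```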
We are short on details as to how events are chosen. It is clear that there are many people involved in proposing events to be analyzed, without any clear indication of why one is chosen but another is not. The informal perusal of the readily available data would help a researcher decide whether or not an event could be considered "global" (for example).
http://noosphere.princeton.edu/meth_p3.html
http://noosphere.princeton.edu/results.html
ETA: I also forgot about the 100-odd events which end up excluded from the main findings. The distinction for those is vague - "for example, an event may be of great interest in one country, but unknown or little attended in the rest of the world." I don't know how that justification applies to Good Friday, for example.
http://noosphere.princeton.edu/res.informal.html
I think it's a bit silly to propose "experimenter psi" without first addressing this gaping hole in the plan. I am also surprised that proponents are behind this idea, given that other examples where non-anomalous knowledge was available, but less obviously so (such as the Hennacy-Powell video you mentioned), were roundly criticized and dismissed.
Linda
*ETA:
" There is one exception to this, namely the Y2K analysis Dean Radin did. He said that he tried several variations before choosing the best representation. That case is included in the formal database, with a Bonferonni correction of a factor of 10."
"Radin used a third order statistic on the variance and computed a sliding median (rather than the sliding average) because the above "worked" and variants did not. In addition it only worked with integer time zones leaving significant others out of the 37 zones. Including just them and keeping Dean's exploratory analysis made the "effect" go away."
http://noosphere.princeton.edu/faq.html
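For reference, the Bonferroni correction mentioned in that FAQ answer is simply the p-value multiplied by the number of analyses tried, capped at 1 - a minimal sketch:

```python
# Bonferroni adjustment for having tried m variants of an analysis.
def bonferroni(p: float, m: int) -> float:
    return min(1.0, p * m)

print(bonferroni(0.004, 10))  # 0.04 -- still nominally significant
print(bonferroni(0.02, 10))   # 0.2  -- no longer significant
```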
(This post was last modified: 2019-01-27, 02:56 PM by fls.)
Here's a short but quite interesting presentation by Roger Nelson from the Society for Scientific Exploration Conference in 2016 - ten minutes' talk followed by questions. It includes a figure that I don't remember seeing in the published papers, showing an analysis by Peter Bancel concluding that about two thirds of the events showed a positive effect, one sixth no effect, and the other one sixth a "true negative" effect. It would be interesting to see more details of that.
The following 1 user Likes Guest's post: Laird
(2019-01-26, 12:25 PM)Chris Wrote: Here's a short but quite interesting presentation by Roger Nelson from the Society for Scientific Exploration Conference in 2016 - ten minutes' talk followed by questions. It includes a figure that I don't remember seeing in the published papers, showing an analysis by Peter Bancel concluding that about two thirds of the events showed a positive effect, one sixth no effect, and the other one sixth a "true negative" effect. It would be interesting to see more details of that.
He mentions Bancel’s findings that solstice events used by the experimenters showed an effect, but the solstice events not included showed no effect. Who chose which solstice events to include? How were those decisions made?
(This post was last modified: 2019-01-27, 02:53 AM by malf.)
Just a reminder that the Global Consciousness Project has provided a lot of information on its website. This includes a page with details of who originated each hypothesis and links to further information about individual events, often discussing the thinking behind the hypotheses. If people are interested, they can learn more here:
http://noosphere.princeton.edu/results.html
Incidentally, a comment was made earlier which seemed to imply that Dean Radin was heavily involved in the selection of hypotheses early in the project. As far as the Formal Hypothesis Registry is concerned, that is not the case. Radin is noted as an originator for only 4 out of the 513 hypotheses, 3 of them jointly with Nelson and others. None of these 4 events produced a significant result. One of them (number 81) was among the 13 events excluded by Bancel because they weren't prespecified or were ambiguous.
Radin did publish an additional post hoc analysis of the GCP data for 9/11, but that was separate from the Formal Hypothesis series.
The following 1 user Likes Guest's post: Laird
(2019-01-27, 09:17 AM)Chris Wrote: Radin is noted as an originator for only 4 out of the 513 hypotheses, 3 of them jointly with Nelson and others.
I think you might have missed a fifth, event #485 - it uses only his first name which perhaps you missed by searching on his surname alone?
But yes, that was one gross distortion among many in that earlier post. I'm glad you corrected it. To correct another: perhaps it has been forgotten that the website offered as proof that real-time data were available was shown in an earlier post to have gone live, at the earliest, only near the very end of the formal experiment, and could at most have been used to "live peek" at the data for the final 7 of the 513 events. An invitation was extended to point out where real-time data might have been available elsewhere; however, that invitation has so far been ignored.
(2019-01-27, 12:47 PM)Laird Wrote: I think you might have missed a fifth, event #485 - it uses only his first name which perhaps you missed by searching on his surname alone?
But yes, that was one gross distortion among many in that earlier post. I'm glad you corrected it. To correct another: perhaps it has been forgotten that the website offered as proof that real-time data were available was shown in an earlier post to have gone live, at the earliest, only near the very end of the formal experiment, and could at most have been used to "live peek" at the data for the final 7 of the 513 events. An invitation was extended to point out where real-time data might have been available elsewhere; however, that invitation has so far been ignored.
Thanks. That's well spotted. So there is that one additional event in which Dean (presumably Dean Radin) was involved, which has to be added to the list.
As you say, a lot of gross distortion has been posted here. Or at least a lot of comments that give a grossly distorted impression, even though the words may have been carefully chosen to avoid literal falsehoods.
Another comment gives the impression that the protocol allowed for the data to be looked at before the hypothesis was fixed, provided that it was done "informally" rather than constituting a "close examination".
On the contrary, it has been clearly stated many times that the protocol did not allow for any examination of the data at all. The most recent such statement was in the interview I posted a link to here the week before last, where exactly this question was raised (at about 40m):
https://www.youtube.com/watch?v=wZYeHPk7_kU
Mishlove: "I would imagine at this point as you're just starting out to analyse the data, that statisticians and mathematicians will say, well there are many different approaches you could have taken. and you'll likely be accused of selecting post hoc the approach that worked the best."
Nelson:"Right. No, we knew those kinds of questions and in fact we had our own questions about what would be a suitable kind of statistical test and we eventually settled on one primary test and then a couple of others that we might use for special purposes. But all of the data from the beginning even when we were in what we call the pilot - should properly call the pilot phase - the experiment required that we register all the parameters - the beginning of the period of time, the end of the period of time, and the statistical test that would be used - all of that had to be registered before the data were examined in any way. So it's been - that's a blessing, because you can then rely on the interpretation of your statistical outcomes."
Of course people can decide to believe they didn't do this - and we know that in some cases events had to be excluded because the protocol wasn't observed. But there's no doubt that, according to the protocol, the hypotheses were supposed to be fixed before the data were examined in any way.
The following 1 user Likes Guest's post: Laird
Archived web snapshot from Nov. 1999. The current webpages only go back to 2015, because that is when everything was moved to a new server.
https://web.archive.org/web/199910130325...ceton.edu/
(This post was last modified: 2019-01-27, 03:57 PM by fls.)