Psience Quest

Full Version: The Global Consciousness Project
(2019-01-19, 12:53 PM)Max_B Wrote: [ -> ]Ian and Laird had already decided to defray their past expenses with the first donations, I thought that was an unwise decision for themselves, as that was their original stake in the site.

(2019-01-19, 12:53 PM)Max_B Wrote: [ -> ]when I later saw Malf and Linda’s contributions, I realised Laird might feel a little constrained by the donations.

(2019-01-19, 12:53 PM)Max_B Wrote: [ -> ]clearly something had motivated such a long post.

I have responded to these three quotes in the "Donations are now possible" thread.
Linda, I don't think there's value for us in further arguing back-and-forth, so I'll leave it at that.
(2019-01-17, 05:41 PM)malf Wrote: [ -> ]In terms of competing hypotheses, one must be that, once presented with colossal amounts of noisy data, mathematics can always produce some statistical significance. 

If Bancel's comments can be trusted, we know that the one thing the GCP is not measuring is GC.

It might be helpful if others suggested their favoured hypotheses in the next few posts?

I've been thinking about this in relation to Gelman's and Loken's "Garden of Forking Paths".
http://www.stat.columbia.edu/~gelman/res...acking.pdf

They refer to 4 different scenarios:
1. There is no flexibility.
2. There is flexibility, but no use can be made of it because of pre-registration.
3. There is flexibility and use can be made of it, but there is no fishing.
4. There is flexibility and there is fishing.

They focus on #3 being reported as though it were #2. The assumption is that there will be an inflation of falsely significant results under #3, but not under #2. This parallels the claim made by Nelson et al.

This leads to two questions with respect to the GCP:
Are they acting under #2 vs. #3?
Is there no difference in the number of falsely significant results under #2 vs. #1?
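The inflation at issue in #3 and #4 is easy to demonstrate with a small simulation (an illustration of the Gelman-Loken point on pure noise, not a model of the GCP's data): a pre-registered test holds its nominal false-positive rate, while a post hoc choice of the best of several individually valid analyses inflates it.

```python
import random

def sim_false_positive_rates(n_experiments=2000, n=100, n_choices=5,
                             alpha_z=1.645, seed=1):
    """On null (pure-noise) data, compare one fixed, pre-registered test
    against 'pick the best of n_choices blockings' chosen after seeing
    the data. Each blocking here is the z-score of a disjoint sub-block,
    so every individual test is valid at the nominal 5% level."""
    rng = random.Random(seed)
    fixed_hits = flexible_hits = 0
    block = n // n_choices
    for _ in range(n_experiments):
        data = [rng.gauss(0.0, 1.0) for _ in range(n)]
        zs = [sum(data[k * block:(k + 1) * block]) / block ** 0.5
              for k in range(n_choices)]
        if zs[0] > alpha_z:        # pre-registered: always the first blocking
            fixed_hits += 1
        if max(zs) > alpha_z:      # flexible: the best blocking, post hoc
            flexible_hits += 1
    return fixed_hits / n_experiments, flexible_hits / n_experiments
```

With five candidate blockings the flexible analyst's false-positive rate approaches 1 - 0.95^5 ≈ 23% at a nominal 5% level, while the pre-registered test stays near 5%.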

Bancel's and other analyses show that there is a difference between the results you would obtain under #1 and the actual results of the GCP - that is, a difference between the results obtained when flexibility is present and when it is absent. For example, while there was flexibility at the beginning of the process in how the New Year's Eve data would be blocked and analyzed, for a number of years there has been no flexibility, as the data are necessarily analyzed the same way each time - and the cumulative results are definitively non-significant (http://noosphere.princeton.edu/events/newyear.2015.html). Bancel (https://www.researchgate.net/publication...xploration) also showed that the choices made in the presence of flexibility, but without the purported ability to use it (i.e. "formal hypotheses") - choices of which identical events to include, which test statistics to use, and which blocking intervals - gave a different result than would be expected if the flexibility truly could not be used. He also demonstrated a correlation between the measured correlations and the timestamp errors, whereas under random error there should be no such correlation unless some process selects for fortuitous timestamp errors.

So either they are really acting under #3, or there is an interesting phenomenon where there are differences between #2 and #1. Probably most non-proponents simply assume that they are acting under #3, given that there is nothing to prevent it. As you mention in a later post, there are ways to tighten this up. It would be interesting to see what happens if they do. If the effect remains, it still won't be Global Consciousness, but it would seem to be anomalous in some way.

Linda
(2019-01-21, 12:55 AM)fls Wrote: [ -> ]As you mention in a later post, there are ways to tighten this up. It would be interesting to see what happens if they do that.

What is the appetite for tightening things up? Given that Bancel suggested improvements in his 2014 paper, have any been implemented?

https://www.researchgate.net/publication...SS_PROJECT


Quote:A strong criticism of the GCP is its reliance on an open-ended protocol for deciding event parameters, and this should be replaced with an algorithmic procedure in any future version of the experiment.



Quote:In practical terms, perhaps the most important consequence of the analyses is that the GCP effect may indeed be subject to signal-to-noise averaging. If this is so, the effect can be studied with far greater statistical power by increasing the number of nodes in the network. A ten-fold increase in the number of RNGs would allow a full replication within 2 to 4 years. Augmenting the network 100-fold would allow for the detection of single events in real time. The detail and power provided by vastly increased data rates would also permit the development of analyses to test models...
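
Bancel's timescale claim can be sanity-checked with back-of-envelope arithmetic (a sketch under assumptions: that cumulative Z grows as the square root of the total data collected, and taking the formal series - roughly 17 years at cumulative Z ≈ 7.31, per the results page cited later in this thread - as the baseline; this is not his exact calculation).

```python
import math

def years_for_target_z(target_z, baseline_z, baseline_years, node_factor):
    """Assume cumulative Z grows as sqrt(nodes * years). The observed run
    fixes the rate for the current network; multiplying the node count by
    node_factor multiplies Z at fixed time by sqrt(node_factor), so the
    time to reach a given Z shrinks by a factor of node_factor."""
    rate = baseline_z / math.sqrt(baseline_years)  # current network, per sqrt-year
    return (target_z / (rate * math.sqrt(node_factor))) ** 2
```

Under these assumptions, years_for_target_z(7.31, 7.31, 17, 10) gives 1.7 years for a ten-fold network to match the original cumulative Z - the same ballpark as the "2 to 4 years" quoted above.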

Chris

I'm not sure whether there's any realistic possibility of getting the discussion back on track, but perhaps it's worth reposting the description of the project from my initial post.

(2017-09-12, 01:51 AM)Chris Wrote: [ -> ]It began as a kind of sequel to the microPK experiments conducted at the PEAR lab at Princeton. It consists of a worldwide network of several dozen random number generators. Essentially, the idea behind it was that at the time of significant events - typically, events that engaged the attention of the whole world - the random number generators would exhibit unusual behaviour. Different measures of unusual behaviour were used at different times, but the commonest was that the numbers produced by the different generators would tend to correlate with one another.
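
That "correlate with one another" measure can be sketched in code (an illustration only - the function names and normalizations here are mine, not the GCP's published implementation): for each second, each generator's output is reduced to a standardized score, and the statistic asks whether those scores covary across generators more than independence allows.

```python
def squared_stouffer(zs):
    """Per-second 'network' statistic: the squared sum of the generators'
    standardized scores. Under independence its expected value is len(zs);
    cross-generator correlation adds the pairwise cross-terms and pushes
    it above that."""
    s = sum(zs)
    return s * s

def mean_pairwise_product(zs):
    """Equivalent pairwise view: the average product over all distinct
    pairs of generators, a direct estimate of the average pairwise
    correlation of the standardized scores."""
    n = len(zs)
    cross_terms = sum(zs) ** 2 - sum(z * z for z in zs)  # = 2 * sum over pairs
    return cross_terms / (n * (n - 1))
```

The two views are algebraically linked: the squared sum minus the sum of squared scores is twice the sum over all distinct pairs, which is exactly what mean_pairwise_product normalizes.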

The network still exists, and continues to generate numbers. It has a Facebook page, where the latest post examines its response to Hurricane Irma:
https://www.facebook.com/EGGproject/
 
But for evidential purposes, the significant data are those produced by the "Registry of Formal Hypotheses and Specifications". According to the organisers of the project, for each of a sequence of 513 events in the period 1998-2015, a statistical hypothesis was specified before the data were examined, and was then tested. In subsequent analysis about a dozen of these events were excluded because the hypotheses were poorly defined, or not defined before any of the data were seen, but for the 500 classified as "rigorously defined", the cumulative Z value was 7.31, corresponding to a p value of 1.333 x 10^-13.
http://global-mind.org/results.html
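
The arithmetic behind a cumulative figure like that can be sketched with a Stouffer-style combination, which reproduces the quoted numbers (a sketch of the standard method, not the project's published code): sum the 500 per-event Z scores, divide by the square root of 500, and take the one-tailed normal p.

```python
import math

def stouffer_z(zs):
    """Combine independent per-event Z scores: under the null each is
    N(0,1), so their sum divided by sqrt(N) is again N(0,1)."""
    return sum(zs) / math.sqrt(len(zs))

def one_tailed_p(z):
    """Upper-tail probability of a standard normal deviate."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

one_tailed_p(7.31) comes out near 1.3 x 10^-13, matching the quoted p value. Note that a mean per-event Z of only 7.31/√500 ≈ 0.33 is enough to produce it.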

As far as I'm aware, that result remains totally unexplained by sceptics. The hypotheses were stated to be pre-specified - that is, specified before the data were examined. The specification wasn't just a vague hypothesis - it was a specific statistical test that would yield a definite Z value for the event. And it was stated that all the pre-specified events would be included, so there would be no "publication bias" in the results.

Perhaps it's also worth reminding people that Peter Bancel's interpretation of what was going on changed quite radically over time. His most recent conclusions - which were published after the conclusion of the "Formal Hypothesis" series - can be read here:
https://www.researchgate.net/publication...xploration
And the accompanying reference material is here:
https://www.researchgate.net/publication...al_details
(2019-01-21, 09:40 AM)Chris Wrote: [ -> ]I'm not sure whether there's any realistic possibility of getting the discussion back on track

Perhaps the discussion is now forked. Perhaps we're being led down the garden path. But we can at least try to maintain clarity.

There is obviously flexibility in the event selection, including which events to choose, their start and end points, and the method of statistical analysis to use. But assuming that "using" this flexibility amounts to increasing the effect size, and given that the researchers say they stipulated event parameters (start and end points, and the statistical analysis to perform) before looking at the data, I can't see how any conventional "use" (psi-based is another story) could be made of this flexibility unless either:
  1. The researchers are lying, and really they peeked at the data before defining event parameters.
  2. As the experiment progressed and events were selected, the researchers learnt which events were most likely to produce an effect, and tended over time to more effectively choose successful event parameters.
The problem with #1 is obvious, and needn't be elaborated on.

When it comes to #2, though, it would imply that there exists in the first place some underlying causal mechanism whose effects can be "learnt". Rather than defeating the Global Consciousness hypothesis, that would tend to confirm it, since, as I've outlined in previous posts, the mechanism would have to be anomalous (basically, because unsynchronised XOR masks preclude a non-anomalous mechanism).
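
The XOR point can be made concrete (a sketch with illustrative parameters, not the actual device logic): XORing a biased raw bit stream with an independent, balanced mask yields output bits that are 1 with probability exactly 1/2, whatever the raw bias, so a conventional physical influence on the raw bits alone cannot shift the masked output's mean.

```python
import random

def masked_one_probability(p_raw, p_mask=0.5):
    """P(output = 1) when a raw bit with P(1) = p_raw is XORed with an
    independent mask bit with P(1) = p_mask:
        P(out = 1) = p_raw * (1 - p_mask) + (1 - p_raw) * p_mask.
    With a balanced mask (p_mask = 0.5) this is 0.5 for ANY p_raw."""
    return p_raw * (1.0 - p_mask) + (1.0 - p_raw) * p_mask

def simulate_masked_stream(p_raw=0.7, n=100_000, seed=0):
    """Empirical check: a heavily biased raw source XORed with a
    balanced pseudo-random mask still averages about 0.5."""
    rng = random.Random(seed)
    ones = 0
    for _ in range(n):
        raw = 1 if rng.random() < p_raw else 0
        ones += raw ^ rng.getrandbits(1)
    return ones / n
```

Only an influence correlated with the mask bits themselves could move the masked mean, which is why unsynchronised masks rule out the conventional mechanisms.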

Of course, Peter Bancel's findings of themselves challenge the Global Consciousness hypothesis, but that's a separate consideration.

Chris

(2019-01-21, 11:05 AM)Laird Wrote: [ -> ]I can't see how any "use" could be made out of this flexibility unless either: 1. The researchers are lying, and really they peeked at the data before defining event parameters. 2. As the experiment progressed and events were selected, the researchers learnt which events were most likely to produce an effect...

I think probably it would be useful to draw a distinction between:
(1) The question of what conventional (non-psi) explanations there may be for the data
(2) The question of what kind of psi processes are implied, if the explanation isn't conventional.
(2019-01-21, 11:39 AM)Chris Wrote: [ -> ]I think probably it would be useful to draw a distinction between:
(1) The question of what conventional (non-psi) explanations there may be for the data

Can you think of any other than experimenter fraud/deception?

(2019-01-21, 11:39 AM)Chris Wrote: [ -> ](2) The question of what kind of psi processes are implied, if the explanation isn't conventional.

Yes, which somewhat echoes malf's request a page or more back for participants in this thread to state their preferred hypothesis. It's a good question/request, though not a straightforward one - I think it invites discussion and further questions rather than firm conclusions, and I hope, after reviewing my notes, to be able to contribute some thoughts of my own.
(2019-01-21, 03:07 AM)malf Wrote: [ -> ]What is the appetite for tightening things up? Given that Bancel suggested improvements in his 2014 paper, have any been implemented?

https://www.researchgate.net/publication...SS_PROJECT

That's a good question. I haven't come across any suggestions of tightening in my reading, but I could have missed it. He does not actually mention any suggestions for tightening in his later 2016 paper (I added a link to that paper in my original post to make it clear which one I was referring to). He admits (in his 2014 paper) that he can't distinguish between anomalous and non-anomalous selection by looking at the data (his Goal-Oriented model and Gelman's and Loken's researcher flexibility without fishing look the same), so he depends only upon the public description of the selection procedure to distinguish between them. He does rule out a simple selection method, but that is different from selection under researcher flexibility anyway. And the "tightening" you refer to in his 2014 paper was designed to distinguish Experimenter PK from Global Consciousness and/or to elucidate the details of the anomalous effect; it was not designed to rule out the effects of researcher flexibility, although the algorithmic selection process would help in this regard.

I suspect that we are going to be left in the state we frequently find ourselves in - supporters are satisfied with the production of a "statistically significant" outcome and want to move on from that, while non-supporters want to see the experiments performed in the absence of opportunities for bias/flexibility before moving on. There seems to be little interest on either side in doing so (supporters don't think it's necessary, and non-supporters think it would turn out to be a waste of time).

Linda
(2019-01-21, 12:13 PM)fls Wrote: [ -> ]opportunities for bias/flexibility

How these are supposed to account for the results is left totally unspecified by you (and everybody else, including malf), which indeed does leave us in "the state we frequently find ourselves in": opponents making vague claims which they refuse to substantiate.