The Global Consciousness Project


(2019-01-25, 04:24 PM)fls Wrote: I don't think Nelson actually says that the data wasn't sometimes looked at informally, only that the outcome statistic and timestamps were specified prior to close examination, for those events which were analyzed...It doesn't mean that some of the researchers aren't looking through the data displays prior to analysis, when wondering about which events may be of interest or trying out various other ideas. 
I noticed on the blog that there was reference to "probes", where momentary samples are taken during long-running events. The given caveat states that probing may need to be informal, which suggests that at other times it need not be. Regardless, it confirms that the researchers do look at the data outside of the formally specified events/statistics.
http://teilhard.global-mind.org/updates.html

Examples of probes used in the formal database:
http://teilhard.global-mind.org/syrian.tragedy.html
http://global-mind.org/astro.III.1.html

Linda
Obviously, a distinction needs to be drawn between the "Formal Hypothesis Registry", which is stated to be prespecified, and other analyses. The informal "probes" are stated to be "not included in the long-running formal replication series", which I take to mean the events in the "Formal Hypothesis Registry".

(Edit: That is what's said about informal "probes". Possibly the word "probe" may be used in different senses.)

On the other hand, if there are discrepancies between the current version of the Registry and the details given in archived web pages, then that would be a cause for concern.
(2019-01-27, 04:10 PM)Chris Wrote: On the other hand, if there are discrepancies between the current version of the Registry and the details given in archived web pages, then that would be a cause for concern.

At first sight there do seem to be some discrepancies in the quoted p values, so I'd like to have a careful look at that when time permits.
(2019-01-27, 03:42 PM)fls Wrote: Archived web snapshot from Nov. 1999. Current webpages only go back to 2015 because that is when everything was moved to a new server.

https://web.archive.org/web/199910130325...ceton.edu/

Thank you. That seems to confirm that the live data was viewable online from close to the start of the formal experiment, if not before.

That's not to say, though, that the experimenters are lying when they say they didn't look at the data before specifying hypotheses. A claim like that would take strong evidence, and all we have here is a possibility - a remote one at that: it's hard to imagine researchers taking turns to monitor the live feed 24x7 and noting down apparently significant deviations to include later in the formal event register. There's also the observation, which you yourself have made, that plenty of the events are not even significant, or deviate in the direction opposite to that predicted: if the experimenters were monitoring for significant events only, then they did a poor job of it.

I think we can all agree though that a tighter experimental protocol intended to be "skeptic-proof" would eliminate possibilities for data peeking like this.
The following 1 user Likes Laird's post:
  • malf
If people are interested in finding out more about how the hypotheses were arrived at, they can look at the three pages with "Events" in their titles linked from here, which go up to 2003:
http://noosphere.princeton.edu/predictions.html
The following 1 user Likes Guest's post:
  • Laird
(2019-01-27, 06:25 PM)Chris Wrote: At first sight there do seem to be some discrepancies in the quoted p values, so I'd like to have a careful look at that when time permits.

There were significant differences between the p values originally calculated and the p values now shown for 9 of the early events. I contacted Roger Nelson to ask if he knew the reason, and he said he thought this would have been because additional data had been received from some random number generators after the original calculations were done.
The following 1 user Likes Guest's post:
  • Laird
(2017-09-15, 06:49 PM)Chris Wrote: If I understand correctly, the process is as follows, in general terms:
(1) The noise is used to generate a stream of bits.
(2) An XOR mask is applied to remove bias.
(3) For each second, the first 200 bits are extracted and added up (which if the RNGs were behaving ideally would produce a binomially distributed random variable with mean 100 and variance 50), and that is what initially goes into the database.
(4) Periodically, the values in the database are renormalised based on the long-term measured variance for each device, to try to make the variance equal to the ideal value. (I'm not sure this additional processing is necessarily a good thing. Maybe it would be better to keep the values produced by step (3), and to bear in mind when analysing them that the variance may depart slightly from the ideal value.)

Roger Nelson also looked at some of the discussion here, and pointed out that number (4) in this description was based on a misunderstanding. In fact the values in the database aren't changed, but when the test statistics are calculated a renormalisation is used, based on the long-term average variance for each device. So the values in the database simply represent the summed raw output of the random number generators.
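
For concreteness, here's a minimal Python sketch of the pipeline as corrected - my own illustration, not the GCP's code, and the XOR mask and the measured-variance figure are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def trial_value(raw_bits, mask):
    """One device-second: XOR the raw bits with a balanced mask to remove
    bias, then sum the first 200 bits. For an ideal RNG the result is
    Binomial(200, 0.5): mean 100, variance 50."""
    return int(np.sum(raw_bits[:200] ^ mask[:200]))

def trial_z(trial, device_variance):
    """Per-trial statistic renormalised with the device's long-term
    measured variance instead of the ideal 50; the stored database
    value (`trial`) itself is never altered."""
    return (trial - 100.0) / np.sqrt(device_variance)

# Illustrative use with simulated hardware noise and an alternating mask.
raw = rng.integers(0, 2, size=200, dtype=np.uint8)
mask = np.tile(np.array([0, 1], dtype=np.uint8), 100)
t = trial_value(raw, mask)
print(t, trial_z(t, device_variance=50.2))  # 50.2 is a made-up figure
```

The point is that the renormalisation lives entirely in the analysis step, so the database always holds the raw summed output.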
The following 1 user Likes Guest's post:
  • Laird
(2019-01-26, 12:25 PM)Chris Wrote: Here's a short but quite interesting presentation by Roger Nelson from the Society for Scientific Exploration Conference in 2016 - ten minutes' talk followed by questions. It includes a figure that I don't remember seeing in the published papers, showing an analysis by Peter Bancel concluding that about two thirds of the events showed a positive effect, one sixth no effect, and the other one sixth a "true negative" effect. It would be interesting to see more details of that.

Roger Nelson kindly identified the source of this figure. It's actually from quite an early analysis by Bancel and Nelson, published in the Journal of Scientific Exploration in 2008:
http://global-mind.org/papers/pdf/GCP.JSE.B&N.2008.pdf

The authors modelled the frequency distribution of z scores for events as a mixture of three normal distributions. The parameters that gave the best fit were:
(1) Positive: 67%, mean 0.56
(2) Null: 16%
(3) Negative: 17%, mean -0.49.
These combined to give the measured overall effect size of 0.30.
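As a quick sanity check on those numbers (my arithmetic, not from the paper): 0.67 × 0.56 + 0.16 × 0 + 0.17 × (−0.49) ≈ 0.375 − 0.083 ≈ 0.29, which agrees with the quoted overall effect size of 0.30 to within rounding of the published parameters.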
Roger Nelson also kindly sent me a link to a short paper by him entitled "Evoked Potentials and GCP Event Data", which draws a parallel between the transient variations of electrical potential in the body following stimuli and the time-varying correlations during events in the GCP data. The idea is that this may suggest an alternative interpretation of the results found by Peter Bancel, which he interpreted as evidence of a psi-mediated experimenter effect related to the selection of the start and end points of the events:
http://global-mind.org/papers/pdf/event....ntials.pdf
I've been thinking about this a bit more, and it seems to me that the graphs produced by Peter Bancel - showing the averaged time-variation of the data before, during and after events - aren't really the clinching evidence for a psi-mediated selection mechanism that they might appear to be.

Here are the graphs:
[Image: SelectionGraphs.jpg]

Both graphs show the time-varying cumulative correlation data. The one on the left is for events where there was flexibility in the choice of start and end times (the events have all been stretched to a duration of 24 hours before averaging), while the one on the right is for 24-hour events starting and ending at midnight, where there was no such flexibility. The key point is that the graph on the left shows that the increase in the cumulative signal during the event is almost exactly cancelled out by decreases before and after the event. But the graph on the right shows no systematic change before or after the event.

The interpretation is that for the events with flexibility, psi has allowed the experimenter to choose start and end points so as to produce a positive effect within each event, at the cost of matching negative contributions before and after. The signal itself remains an ideal random walk, but a favourable period of time has been selected. For the 24-hour events this cannot happen. Instead, presumably, psi has allowed the experimenter to choose favourable days for 24-hour events, and to avoid unfavourable days.

One question that might be asked is why there shouldn't also be an element of choosing whether to define events on favourable days in the left-hand graph, as well as just choosing favourable start and end points. If there were such an element, the negative contributions before and after the event would only partially cancel out the positive contribution from the event itself. But as the graph shows, this didn't happen.

Another question is whether we can actually produce a graph that looks like the left-hand one by defining suitable rules for the selection of start and end points. I doubt we can. Perhaps the simplest rule would be to specify a start period and an end period for each event, and to select start and end points according to the location of the minimum and maximum respectively of the cumulative curve during these periods. I reckon that would produce something of the form below, which doesn't look much like the experimental data:
[Image: SelectionGraphTheoretical.jpg]
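
For anyone who wants to reproduce something like that curve, here's a rough Python simulation of the rule - entirely my own construction, with arbitrary window sizes and event counts:

```python
import numpy as np

rng = np.random.default_rng(1)

N_EVENTS = 5000    # number of simulated events (arbitrary)
T = 800            # record length in steps
S0, S1 = 100, 200  # window within which the start point is "chosen"
E0, E1 = 600, 700  # window within which the end point is "chosen"
u = np.linspace(-0.15, 1.15, 131)  # common axis: event spans u in [0, 1]

avg = np.zeros_like(u)
for _ in range(N_EVENTS):
    walk = np.cumsum(rng.standard_normal(T))    # ideal cumulative random walk
    start = S0 + int(np.argmin(walk[S0:S1]))    # select start: window minimum
    end = E0 + int(np.argmax(walk[E0:E1]))      # select end: window maximum
    # Stretch onto normalised event time and re-zero at the selected start,
    # mimicking the stretching of events to a common duration before averaging.
    t_norm = (np.arange(T) - start) / (end - start)
    avg += np.interp(u, t_norm, walk - walk[start])
avg /= N_EVENTS
# avg declines into u = 0, rises across the event (0 to 1), and declines
# again after u = 1, so the in-event gain is cancelled outside the event.
```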
There's also a more quantitative objection to the psi-mediated selection mechanism. In the simple model just suggested, the selection within the start and end periods produces a contribution to the cumulative statistic that is proportional to the square roots of the durations of those start and end periods. That's because of the fundamental fact that the deviation of a random walk scales like the square root of time. So the same square-root scaling will also apply to more sophisticated models of selection based on the behaviour of the signal within fixed start and end periods.

That means we can estimate how the size of the effect, expressed as a Z value per event, should vary with the duration of the event (call it N). That depends on whether the start and end periods have fixed lengths, or whether they grow in proportion to the duration of the event. If they have fixed lengths, Z is inversely proportional to the square root of N. If they grow in proportion to the duration, Z is independent of N. Those cases are in contrast to a field-type effect, for which Z would be proportional to the square root of N.
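
In symbols (my notation: c a constant of order one, T_win the length of the selection windows, and ε a constant per-second field effect):

$$
Z(N) \sim \frac{c\sqrt{T_{\mathrm{win}}}}{\sqrt{N}} \propto
\begin{cases}
N^{-1/2}, & T_{\mathrm{win}}\ \text{fixed},\\
N^{0}, & T_{\mathrm{win}} \propto N,
\end{cases}
\qquad\text{versus}\qquad
Z(N) \sim \frac{\varepsilon N}{\sqrt{N}} \propto N^{1/2}\ \text{(field effect)}.
$$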

It so happens that in 2015 (in "Evidence for Psi", edited by Broderick and Goertzel) Bancel tested the dependence of Z on N for events whose duration was 12 hours or less (thus, fortuitously, excluding the 24-hour events for which he now believes the mechanism of psi-mediated selection is different). He called this a signal-to-noise test. The result was that he rejected what he then described as the "data selection hypothesis", in which Z was independent of N. The result of the statistical test would translate to a (one-tailed) p value of .0037. That rejected hypothesis would be equivalent to the start and end periods growing in proportion to the duration of the event. The alternative, where the start and end periods had fixed lengths, would have been even more strongly rejected. (The 2015 paper was based on 426 events, and therefore represented about 85% of the complete series.)
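
Bancel's actual test statistic isn't given here, but a crude signal-to-noise test along these lines is easy to sketch in Python (everything below, including the toy data, is my own illustration): fit z = a·f(N) for each candidate scaling and see which leaves the smallest residuals.

```python
import numpy as np

def compare_scalings(z, n):
    """For each candidate scaling f, fit z = a * f(n) by least squares and
    return the residual sum of squares (smaller = better fit). A crude
    stand-in for a signal-to-noise test, not Bancel's actual statistic."""
    candidates = {"selection, fixed windows": n ** -0.5,
                  "selection, proportional windows": np.ones_like(n),
                  "field effect": n ** 0.5}
    rss = {}
    for name, f in candidates.items():
        a = np.dot(f, z) / np.dot(f, f)   # least-squares amplitude
        rss[name] = float(np.sum((z - a * f) ** 2))
    return rss

# Illustrative use on fabricated events: durations up to 12 hours, with a
# toy field effect (z growing like the square root of duration) plus noise.
rng = np.random.default_rng(2)
n = rng.uniform(600, 12 * 3600, size=426)
z = 0.3 * np.sqrt(n / 3600) + rng.standard_normal(426)
print(compare_scalings(z, n))
```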

Peter Bancel's conclusion at that time was that both a simple selection hypothesis and a straightforward global consciousness field hypothesis had to be rejected:
"The analysis of data structure rejects the simple selection hypothesis at a reasonably high level of confidence. The signal-to-noise analysis provides the most clearcut support for this conclusion. ...Tests for a loophole to circumvent the XOR no-go suggest that a straightforward conception of proto-psi global consciousness is also not tenable. ...
The analyses, then, provide good arguments for rejecting both simple models and we are forced to look elsewhere for an explanation."
https://books.google.com/books?id=KVyQBQ...&lpg=PA274
The following 1 user Likes Guest's post:
  • Laird
