Psience Quest

Full Version: The Global Consciousness Project
(2019-01-27, 09:17 AM)Chris Wrote: [ -> ]Radin is noted as an originator for only 4 out of the 513 hypotheses, 3 of them jointly with Nelson and others.

I think you might have missed a fifth: event #485. It uses only his first name, which perhaps you missed by searching on his surname alone.

But yes, that was one gross distortion among many in that earlier post. I'm glad you corrected it. To correct another: perhaps it has been forgotten that the website offered as proof that real-time data was available was shown in an earlier post to have gone live, at the earliest, only near the very end of the formal experiment, and so could at most have been used to "live peek" at the data for the final 7 of the 513 events. An invitation was extended to point out where real-time data might have been available elsewhere; however, that invitation has so far been ignored.

Chris

(2019-01-27, 12:47 PM)Laird Wrote: [ -> ]I think you might have missed a fifth: event #485. It uses only his first name, which perhaps you missed by searching on his surname alone.

But yes, that was one gross distortion among many in that earlier post. I'm glad you corrected it. To correct another: perhaps it has been forgotten that the website offered as proof that real-time data was available was shown in an earlier post to have gone live, at the earliest, only near the very end of the formal experiment, and so could at most have been used to "live peek" at the data for the final 7 of the 513 events. An invitation was extended to point out where real-time data might have been available elsewhere; however, that invitation has so far been ignored.

Thanks. That's well spotted. So there is that one additional event in which Dean (presumably Dean Radin) was involved, which has to be added to the list.

As you say, a lot of gross distortion has been posted here. Or at least a lot of comments that give a grossly distorted impression, even though the words may have been carefully chosen to avoid literal falsehoods.

Another comment gives the impression that the protocol allowed for the data to be looked at before the hypothesis was fixed, provided that it was done "informally" rather than constituting a "close examination".

On the contrary, it has been clearly stated many times that the protocol did not allow for any examination of the data at all. The most recent such statement was in the interview I posted a link to here the week before last, where exactly this question was raised (at about 40m):
https://www.youtube.com/watch?v=wZYeHPk7_kU

Mishlove: "I would imagine at this point, as you're just starting out to analyse the data, that statisticians and mathematicians will say, well, there are many different approaches you could have taken, and you'll likely be accused of selecting post hoc the approach that worked the best."

Nelson: "Right. No, we knew those kinds of questions and in fact we had our own questions about what would be a suitable kind of statistical test and we eventually settled on one primary test and then a couple of others that we might use for special purposes. But all of the data from the beginning even when we were in what we call the pilot - should properly call the pilot phase - the experiment required that we register all the parameters - the beginning of the period of time, the end of the period of time, and the statistical test that would be used - all of that had to be registered before the data were examined in any way. So it's been - that's a blessing, because you can then rely on the interpretation of your statistical outcomes."

Of course people can decide to believe they didn't do this - and we know that in some cases events had to be excluded because the protocol wasn't observed. But there's no doubt that, according to the protocol, the hypotheses were supposed to be fixed before the data were examined in any way.
Archived web shot from Nov. 1999. Current webpages only go back to 2015 because that is when everything was moved to a new server.

https://web.archive.org/web/199910130325...ceton.edu/
https://web.archive.org/web/199910130325...ceton.edu/
(2019-01-25, 04:24 PM)fls Wrote: [ -> ]I don't think Nelson actually says that the data wasn't sometimes looked at informally, only that the outcome statistic and timestamps were specified prior to close examination, for those events which were analyzed...It doesn't mean that some of the researchers aren't looking through the data displays prior to analysis, when wondering about which events may be of interest or trying out various other ideas. 
I noticed on the blog that there was reference to "probes", where momentary samples are taken during long running events. The given caveat states that it may need to be informal, which suggests that there are times when it may not. Regardless, it confirms that the researchers are looking at the data outside of just the formal specified events/statistics.
http://teilhard.global-mind.org/updates.html

Examples of probes used in the formal database:
http://teilhard.global-mind.org/syrian.tragedy.html
http://global-mind.org/astro.III.1.html

Linda

Chris

Obviously, a distinction needs to be drawn between the "Formal Hypothesis Registry", which is stated to be prespecified, and other analyses. The informal "probes" are stated to be "not included in the long-running formal replication series", which I take to mean the events in the "Formal Hypothesis Registry".

(Edit: That is what's said about informal "probes". Possibly the word "probe" may be used in different senses.)

On the other hand, if there are discrepancies between the current version of the Registry and the details given in archived web pages, then that would be a cause for concern.

Chris

(2019-01-27, 04:10 PM)Chris Wrote: [ -> ]On the other hand, if there are discrepancies between the current version of the Registry and the details given in archived web pages, then that would be a cause for concern.

At first sight there do seem to be some discrepancies in the quoted p values, so I'd like to have a careful look at that when time permits.
(2019-01-27, 03:42 PM)fls Wrote: [ -> ]Archived web shot from Nov. 1999. Current webpages only go back to 2015 because that is when everything was moved to a new server.

https://web.archive.org/web/199910130325...ceton.edu/
https://web.archive.org/web/199910130325...ceton.edu/

Thank you. That seems to confirm that the live data was viewable online from close to the start of the formal experiment, if not before.

That's not to say, though, that the experimenters are lying when they say they didn't look at the data before specifying hypotheses. A claim like that would require strong evidence, and all we have here is a possibility - and a remote one at that: it's hard to imagine researchers taking turns to monitor the live feed 24/7 and noting down apparently significant deviations to include later in the formal event register. There's also the observation you yourself have made that plenty of the events are not even significant, or deviate in the direction opposite to that predicted: if the experimenters were monitoring for significant events only, then they did a poor job of it.

I think we can all agree, though, that a tighter experimental protocol intended to be "skeptic-proof" would eliminate possibilities for data peeking like this.

Chris

If people are interested in finding out more about how the hypotheses were arrived at, they can look at the three pages with "Events" in their titles linked from here, which go up to 2003:
http://noosphere.princeton.edu/predictions.html

Chris

(2019-01-27, 06:25 PM)Chris Wrote: [ -> ]At first sight there do seem to be some discrepancies in the quoted p values, so I'd like to have a careful look at that when time permits.

There were significant differences between the p values originally calculated and the p values now shown for 9 of the early events. I contacted Roger Nelson to ask if he knew the reason, and he said he thought this would have been because additional data had been received from some random number generators after the original calculations were done.

Chris

(2017-09-15, 06:49 PM)Chris Wrote: [ -> ]If I understand correctly, the process is as follows, in general terms:
(1) The noise is used to generate a stream of bits.
(2) An XOR mask is applied to remove bias.
(3) For each second, the first 200 bits are extracted and summed (if the RNGs were behaving ideally, this would produce a binomially distributed random variable with mean 100 and variance 50), and that is what initially goes into the database.
(4) Periodically, the values in the database are renormalised based on the long-term measured variance for each device, to try to make the variance equal to the ideal value. (I'm not sure this additional processing is necessarily a good thing. Maybe it would be better to keep the values produced by step (3), and to bear in mind when analysing them that the variance may depart slightly from the ideal value.)
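Steps (1) to (3) of that description can be simulated in a few lines. This is a minimal sketch, not the GCP's actual firmware or pipeline; in particular, the simple alternating XOR mask here is an illustrative stand-in for whatever masking the real devices use, chosen because it cancels any constant bias in the bit stream's mean.

```python
import numpy as np

rng = np.random.default_rng(42)
n_seconds = 10_000

# (1) Raw bit stream from the noise source (simulated here with a PRNG).
raw_bits = rng.integers(0, 2, size=200 * n_seconds, dtype=np.uint8)

# (2) XOR mask to remove bias. A fixed alternating 0/1 mask flips every
# other bit, so a constant bias p in the source averages out to 0.5.
mask = np.tile(np.array([0, 1], dtype=np.uint8), raw_bits.size // 2)
unbiased = raw_bits ^ mask

# (3) Sum each second's 200 bits. For an ideal source this is
# Binomial(200, 0.5): mean 100, variance 50.
per_second = unbiased.reshape(n_seconds, 200).sum(axis=1)

print(per_second.mean())  # close to 100 (ideal mean)
print(per_second.var())   # close to 50 (ideal variance)
```

With an unbiased simulated source, the per-second sums land near the ideal mean of 100 and variance of 50; a real device's long-run variance can drift slightly from 50, which is what step (4) is concerned with.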

Roger Nelson also looked at some of the discussion here, and pointed out that number (4) in this description was based on a misunderstanding. In fact the values in the database aren't changed, but when the test statistics are calculated a renormalisation is used, based on the long-term average variance for each device. So the values in the database simply represent the summed raw output of the random number generators.
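The corrected picture - raw sums stored unchanged, renormalisation applied only when test statistics are computed - amounts to dividing by the device's measured standard deviation rather than the ideal one. A minimal sketch, with the function and parameter names being illustrative rather than taken from the GCP's code:

```python
import math

def trial_z(trial_sum: float, device_variance: float) -> float:
    """Z-score for one trial (a one-second sum of 200 bits).

    The stored database value (trial_sum) is the raw summed output;
    the renormalisation happens here, at analysis time, using the
    device's long-term measured variance instead of the ideal 50.
    """
    return (trial_sum - 100.0) / math.sqrt(device_variance)

print(trial_z(107, 50.0))  # z under the ideal variance
print(trial_z(107, 49.2))  # slightly larger z under an off-ideal measured variance
```

The design choice Nelson describes keeps the database a faithful record of raw device output, so any change to the normalisation affects only the analysis, not the archived data.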