The Global Consciousness Project
(2019-01-19, 07:35 PM)Chris Wrote: Seriously, malf. You really can't expect people to go to the trouble of answering your questions when you waste so much of their time by behaving as you have today.

Part of the problem here is that when I said ‘event selection’ you thought I meant ‘timing selection’ or ‘duration selection’ or something. In fact I meant ‘event selection’... My (ignored) questions:

1. What do you think is going on? Bancel’s ‘experimenter psi’, Radin’s ‘love’ affecting distant arbitrary electronic devices, or some sort of ‘p-hacking’?
2. Does anyone know if the protocols and hypothesis analysis have been through any sort of peer review?
3. Who chooses the ‘event’?

Only if you’ve enough time, obv.
malf
Just to be clear - I am not going to respond to your posts from now on. I said it last October, and I was a fool to go back on it.

(2019-01-16, 09:42 AM)Chris Wrote: Jeffrey Mishlove has a one-hour interview with Roger Nelson about the Global Consciousness Project in his "New Thinking Allowed" series:

In case anyone is interested, the 1989 paper by Dean Radin mentioned by Mishlove, in which he claimed to have trained a neural network to recognise the "signatures" of different experimental subjects - using data from random number generator studies done in the PEAR lab - can be found here: http://deanradin.com/articles/1989%20neu...rk%201.pdf

(2019-01-19, 12:53 PM)Max_B Wrote: Ian and Laird had already decided to defray their past expenses with the first donations, I thought that was an unwise decision for themselves, as that was their original stake in the site.

(2019-01-19, 12:53 PM)Max_B Wrote: when I later saw Malf and Linda’s contributions, I realised Laird might feel a little constrained by the donations.

(2019-01-19, 12:53 PM)Max_B Wrote: clearly something had motivated such a long post.

I have responded to these three quotes in the "Donations are now possible" thread.
Linda, I don't think there's value for us in further arguing back-and-forth, so I'll leave it at that.
(2019-01-17, 05:41 PM)malf Wrote: In terms of competing hypotheses, one must be that, once presented with colossal amounts of noisy data, mathematics can always produce some statistical significance.

I've been thinking about this in relation to Gelman and Loken's "Garden of Forking Paths" (http://www.stat.columbia.edu/~gelman/res...acking.pdf). They refer to four different scenarios:

1. There is no flexibility.
2. There is flexibility, but no use can be made of it because of pre-registration.
3. There is flexibility and use can be made of it, but there is no fishing.
4. There is flexibility and there is fishing.

They focus on #3 reported as though it were #2. The assumption is that there will be an inflation of falsely significant results under #3, but not under #2 (a quick simulation below, after the quoted excerpts, illustrates this difference). This parallels the claim made by Nelson et al. It leads to two questions with respect to the GCP: are they acting under #2 or under #3? And is there really no difference in the number of falsely significant results under #2 vs. #1?

Bancel's and other analyses show us that there is a difference between the results you would obtain under #1 and the results of the GCP. That is, there is a difference between the results obtained when flexibility is present and when it is absent. For example, while there was flexibility at the beginning of the process in how the New Year's Eve data would be blocked and analyzed, for a number of years there has been no flexibility, as it is necessarily analyzed the same way each time - and the cumulative results are definitively non-significant (http://noosphere.princeton.edu/events/newyear.2015.html).

Bancel (https://www.researchgate.net/publication...xploration) also showed that the choices made in the presence of flexibility, but without the purported ability to use that flexibility (i.e. "formal hypotheses") - choices of which identical events to include, which test statistics, and which blocking intervals - gave a different result than would be expected if the flexibility truly could not be used. He also demonstrated a correlation between the measured correlations and the timestamp errors, whereas there should be no correlation in the setting of random error unless some process selects for fortuitous timestamp errors.

So either they are really acting under #3, or there is an interesting phenomenon producing differences between #2 and #1. Probably most non-proponents just assume they are acting under #3, given that there isn't anything to prevent it. As you mention in a later post, there are ways to tighten this up. It would be interesting to see what happens if they do that. If the effect remains, it still won't be Global Consciousness, but it would seem to be anomalous in some way.

Linda

(2019-01-21, 12:55 AM)fls Wrote: What is the appetite for tightening things up? Given that Bancel suggested improvements in his 2014 paper, have any been implemented? https://www.researchgate.net/publication...SS_PROJECT

Quote: A strong criticism of the GCP is its reliance on an open-ended protocol for deciding event parameters and this should be replaced with an algorithmic procedure in any future version of the experiment

Quote: In practical terms, perhaps the most important consequence of the analyses is that the GCP effect may indeed be subject to signal-to-noise averaging. If this is so, the effect can be studied with far greater statistical power by increasing the number of nodes in the network. A ten-fold increase in the number of RNGs would allow a full replication within 2 to 4 years.
Augmenting the network 100-fold would allow for the detection of single events in real time. The detail and power provided by vastly increased data rates would also permit the development of analyses to test models...
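To make the #2 vs. #3 distinction above concrete, here is a minimal Monte Carlo sketch in Python. It is purely illustrative and not based on GCP data or code: it draws pure noise and compares the false-positive rate of a single pre-registered test (scenario #2) against the rate when the analyst reports whichever of a few defensible-looking variants comes out strongest (scenario #3). The particular "flexible" choices (halves, trimming) are my own stand-ins for choices like blocking intervals and test statistics.

Code:
import numpy as np

rng = np.random.default_rng(42)

N_SIM = 10_000   # simulated "experiments", all pure noise
N = 100          # samples per experiment
Z_CRIT = 1.96    # two-sided 5% threshold on a z statistic

hits_fixed = 0   # scenario #2: one pre-registered analysis
hits_flex = 0    # scenario #3: best of several post-hoc variants

for _ in range(N_SIM):
    x = rng.standard_normal(N)

    # Pre-registered analysis: z-test on the full-sample mean.
    z_fixed = x.mean() * np.sqrt(N)
    hits_fixed += abs(z_fixed) > Z_CRIT

    # Flexible analysis: the same data, but with a few defensible
    # choices available after seeing it. (The trimmed variant is only
    # approximately z-distributed; close enough for illustration.)
    candidates = [
        x.mean() * np.sqrt(N),                      # full sample
        x[: N // 2].mean() * np.sqrt(N // 2),       # first half
        x[N // 2 :].mean() * np.sqrt(N - N // 2),   # second half
        np.sort(x)[5:-5].mean() * np.sqrt(N - 10),  # trimmed
    ]
    hits_flex += max(abs(z) for z in candidates) > Z_CRIT

print(f"false-positive rate, pre-registered: {hits_fixed / N_SIM:.3f}")
print(f"false-positive rate, flexible:       {hits_flex / N_SIM:.3f}")

On runs like this the pre-registered rate sits near the nominal 0.05 while the flexible rate comes out well above it, even though each candidate analysis is defensible on its own - which is exactly the inflation Gelman and Loken describe for #3 reported as #2.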
I'm not sure whether there's any realistic possibility of getting the discussion back on track, but perhaps it's worth reposting the description of the project from my initial post.
(2017-09-12, 01:51 AM)Chris Wrote: It began as a kind of sequel to the microPK experiments conducted at the PEAR lab at Princeton. It consists of a worldwide network of several dozen random number generators. Essentially the idea behind it was that at the time of significant events - typically, events that engaged the attention of the whole world - the random number generators would exhibit unusual behaviour. Different measures of unusual behaviour were used at different times, but the commonest was that the numbers produced by the different generators would tend to correlate with one another.

Perhaps it's also worth reminding people that Peter Bancel's interpretation of what was going on changed quite radically over time. His most recent conclusions - which were published after the conclusion of the "Formal Hypothesis" series - can be read here: https://www.researchgate.net/publication...xploration

And the accompanying reference material is here: https://www.researchgate.net/publication...al_details
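For anyone wanting to see what "tend to correlate with one another" cashes out to as a test statistic: the GCP's standard analysis is usually described as summing each device's 200 nominally fair bits per second (null mean 100, variance 50), combining the normalized per-second values across devices into a Stouffer Z, and summing the squares of those per-second Zs over the event, which is chi-squared under the null. The sketch below is my paraphrase of that published description in Python, not the project's actual code; the function name and demo numbers are mine.

Code:
import numpy as np
from scipy import stats

def network_coherence_z(trials):
    """Paraphrase of a GCP-style per-event statistic.

    trials: array of shape (n_seconds, n_devices), where each entry
    is the sum of 200 nominally fair bits from one RNG in one second
    (null mean 100, null variance 50).
    """
    n_seconds, n_devices = trials.shape

    # Normalize each per-second, per-device trial under the null.
    z = (trials - 100.0) / np.sqrt(50.0)

    # Stouffer Z across devices for each second: large in magnitude
    # when devices deviate in the same direction at the same time.
    stouffer = z.sum(axis=1) / np.sqrt(n_devices)

    # The sum of squared per-second Zs is chi-squared with n_seconds
    # degrees of freedom under the null (one-sided: excess coherence).
    chi2 = float((stouffer ** 2).sum())
    p = stats.chi2.sf(chi2, df=n_seconds)
    return stats.norm.isf(p), p

# Demo on pure noise: a one-hour "event" with 60 devices.
rng = np.random.default_rng(0)
fake = rng.binomial(n=200, p=0.5, size=(3600, 60)).astype(float)
print(network_coherence_z(fake))  # z should hover near 0

Note that nothing in this statistic decides which periods count as events - which is precisely where the event-selection flexibility discussed earlier in the thread enters.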