Psience Quest

Full Version: The Global Consciousness Project
(2018-10-18, 04:06 PM)fls Wrote: [ -> ]there are a plethora of alternative hypotheses left to choose from, most of which are not anomalous.

Which non-anomalous hypotheses do you think we are left to choose from, and of those, which do you think are most plausible?



(2018-10-18, 08:15 PM)Steve001 Wrote: [ -> ]The GCP is a giant widget detector attempting to demonstrate human collective will can influence classical phenomena. The problem arises because there's no evidence will can influence classical reality. Now you might balk at that and point out The Copenhagen Interpretation of QM theory. But that is one interpretation, there are other interpretations of equal validity. This giant widget detector is an extension of the PEAR lab. During the entire history not one independent lab was able to replicate PEAR's claimed positive results. Is that answer satisfying?

In context, I'm going to interpret that answer as: "There must have been an error in the calculation of the overall Z-score/p-value, because a deviation so far from chance expectation in an experiment of this nature is impossible given the way I (Steve001) understand the world to work based on that preexisting evidence which I accept".

Fair enough?

Re successful PEAR replications, Dean Radin and Roger Nelson back in 1989 were able to find plenty - and their meta-analysis of all similar studies (both successful and unsuccessful) found an overall consistent effect.

Re collective will influencing classical phenomena - I think for this project it's more a matter of quantum phenomena than classical phenomena, don't you?



(2018-10-19, 09:34 PM)malf Wrote: [ -> ]From where do you get the claim "over seven standard deviations from the norm" ?

From the project's results page - Chris referenced it in the very first post to this thread (emphasis added by me):

(2017-09-12, 01:51 AM)Chris Wrote: [ -> ]In subsequent analysis about a dozen of these events were excluded because the hypotheses were poorly defined, or not defined before any of the data were seen, but for the 500 classified as "rigorously defined", the cumulative Z value was 7.31, corresponding to a p value of 1.333 x 10^-13.
http://global-mind.org/results.html
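For anyone wanting to check the quoted conversion from Z to p for themselves, a minimal sketch using only the Python standard library (the function name z_to_p_one_tailed is mine, not the project's):

```python
import math

def z_to_p_one_tailed(z):
    """One-tailed p-value for a standard normal Z-score.

    Uses the identity P(Z > z) = erfc(z / sqrt(2)) / 2,
    which avoids any dependency beyond the standard library.
    """
    return math.erfc(z / math.sqrt(2)) / 2

p = z_to_p_one_tailed(7.31)
print(p)  # about 1.33e-13, matching the quoted figure
```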
(2018-10-20, 07:37 AM)Laird Wrote: [ -> ]Which non-anomalous hypotheses do you think we are left to choose from, and of those, which do you think are most plausible?

I wouldn't be able to guess at all of them - any/all of the various ways in which the EGGs produce data which doesn't follow a theoretical distribution, including errors and biases:

http://noosphere.princeton.edu/errors.html
http://noosphere.princeton.edu/gcpdata.html
http://noosphere.princeton.edu/xor.html

And environmental processes:

http://noosphere.princeton.edu/longterm.sunspot.html
http://noosphere.princeton.edu/reg.html ("temperature change, electromagnetic fields")

(Not an exhaustive list.)

Assuming the selection process happens to occasionally capture a real effect, there's no test at all of what that may consist of - geomagnetic fields, electro-communications traffic, time of day, location of galactic center, pirate activity, etc.

The most plausible, unfortunately, is the ease with which the selection process can be post hoc. Given that the data is collected before the hypotheses are registered, and that there is an ongoing graphical representation of the data which conveniently shows you where the deviations are, it is far too easy to select your events of interest post hoc (https://www.heartmath.org/gci/gcms/live-...s-project/). However, what speaks against this is how unremarkable the results are. Almost every test of the idea is negative. There are a handful of "positive" tests, where the variance was significantly correlated - 42 out of 513 on my rough count. But you'd expect 51 "positive" tests due to chance anyways, so I'm not sure what we're supposed to make of that.

However, even if we buy that the adjustments to the EGG data remove all bias and do not introduce any bias, that the hypotheses are highly specific when formed, and that anyone who is involved in developing and registering the hypotheses has never clicked on the HeartMath link or otherwise looked at the data, there hasn't been a positive test of what might be responsible for the small positive bias in the selected samples. In particular, there hasn't been a positive test of whether or not "global consciousness" correlates with this bias (as mentioned by malf back in post 150 https://psiencequest.net/forums/thread-t...2#pid21732). All we can say is that a capricious ad hoc selection process has managed to select a slightly biased sample.

Linda
Quote:Laird: Which non-anomalous hypotheses do you think we are left to choose from, and of those, which do you think are most plausible?

Linda: I wouldn't be able to guess at all of them - any/all of the various ways in which the EGGs produce data which doesn't follow a theoretical distribution, including errors and biases

But the RNGs produce data which does[1] follow a theoretical distribution, which - as the very links you shared demonstrate[2] - is as expected given both their design[3] and the experiment's quality control precautions[4].

A non-anomalous hypothesis then has to explain not just why and how the proposed cause increases RNG network variance[5] during formally-specified events, but also why this cause does not during the same periods affect the output of individual RNGs, which remains at chance expectation.
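This distinction - individual devices at chance while the network statistic deviates - can be illustrated with a toy simulation. This is not the GCP's actual netvar statistic, and the parameters (20 devices, a shared-signal correlation of 0.1) are arbitrary assumptions of mine; it only shows how a weak shared component inflates a Stouffer-style network Z without making any single device look biased:

```python
import math
import random

random.seed(42)

N_RNGS = 20      # devices in the toy network
N_TRIALS = 5000  # one Stouffer Z per trial
R = 0.1          # hypothetical shared-signal correlation

stouffer = []
individual = []  # track one device's output for comparison
for _ in range(N_TRIALS):
    shared = random.gauss(0, 1)
    # Each device mixes the shared signal with its own noise so that
    # its marginal distribution remains exactly standard normal.
    zs = [math.sqrt(R) * shared + math.sqrt(1 - R) * random.gauss(0, 1)
          for _ in range(N_RNGS)]
    individual.append(zs[0])
    stouffer.append(sum(zs) / math.sqrt(N_RNGS))

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(var(individual))  # close to 1: each device alone looks like chance
print(var(stouffer))    # close to 1 + (N_RNGS - 1) * R = 2.9: inflated network variance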

It also needs to take into account that the project hypothesised, and that its event selection process was predicated on, the existence of a correlation between major world events and RNG network variance[6], and that the project's results are significantly consistent with this correlation.

Since we are exploring non-anomalous explanations, we cannot dismiss this successful selection process and the resulting correlation by substituting, as does Peter Bancel, an alternative explanation of successful event selection based on a goal-oriented model, an experimenter effect, or decision augmentation theory - and to dismiss it as simply spurious and random would be to essentially take the position that you take here:

(2018-10-20, 08:20 PM)fls Wrote: [ -> ]All we can say is that a capricious ad hoc selection process has managed to select a slightly biased sample.

In this case, we would have to explain how that capricious, ad hoc selection process managed to select a slightly biased sample whose overall level of statistical significance resampling demonstrates[7] could be achieved only about one time in 100,000 capricious, ad hoc attempts. Too, that was back in 2008, when the set of events numbered only 212. With the present database of circa 500 events, I would expect that the number of capricious, ad hoc samples required to achieve the present level of statistical significance is much, much higher. Are you aware of a plausible non-anomalous explanation for this?

Without being able to dismiss that correlation, and given that it appears to be causal in nature due to its high level of statistical significance, we have a causal scenario in which not only does the hypothesised non-anomalous cause affect RNG network variance, but (the occurrence of) major world events somehow either (directly or via one or more intermediaries) affects or is affected by the hypothesised non-anomalous cause. This clearly complicates the requirements of the explanation.

Too, if the explanation is that the hypothesised non-anomalous cause affects (the occurrence of) major world events, then, considering that many of the formally-specified events had their times fixed in advance (e.g., New Year's Eve), this would seem to entail that the hypothesised non-anomalous cause must affect the human scheduling of those events, which seems... pretty anomalous. Arguably, then, the causal relationship would have to go in the other direction: that major world events affect the hypothesised non-anomalous cause.

Other causality-related complications include Peter Bancel's findings, amongst others, that the results for a surrogate set of unregistered major world events were non-significant even though the results for the comparable set of formally-registered major world events were very significant[8], and that the results using alternative methods of analysis for the same set of formally-registered events were non-significant whereas the results using the formally-registered methods of analysis of the events were highly significant, even though all those methods of analysis were at some point used in formally-registered events[9] - both of which on a non-anomalous hypothesis are hard to explain, but which for simplicity I have excluded from the below summary.

To sum up, a non-anomalous hypothesis has, as far as I can see, at least three (numbered in parentheses) explanatory requirements:

(1) Why and how the proposed non-anomalous cause increases RNG network variance during formally-specified events (2) without simultaneously causing the output of individual RNGs to deviate from chance expectation, and (3) why and how major world events affect or are affected by the hypothesised non-anomalous cause.

As far as I can see, none of these requirements is met by any of the examples you've provided: "geomagnetic fields, electro-communications traffic, time of day, location of galactic center, pirate activity".

For a start, we seem to be able to discard the final three of them, at least as ultimate causes: time can't - as far as I can tell - itself be a cause, only a proxy marker for a cause; for pirate activity to be an ultimate cause would clearly be anomalous; and the location of the galactic center as an ultimate cause would also appear to be anomalous, for similar reasons as those for which astrology is considered to be anomalous.

That leaves us with "geomagnetic fields" and "electro-communications traffic", which we can abstract as "electromagnetism" in general. Perhaps you can explain how electromagnetism (in general or in either of these specific cases) fulfills the three explanatory requirements, because I cannot see that or how it does. The researchers themselves say that they "have excluded reasonable mundane explanations such as electromagnetic radiation, excessive strain on the power grid, or mobile phone use".[10]

The other candidate cause which you implied by sharing a link is sunspot activity. It does, as the page to which you linked suggests, appear to be correlated with the cumulative deviation of RNG network variance. This is very intriguing, but it is far from a non-anomalous explanation in and of itself, and the researchers themselves do not seem to think that it can be developed into one. They suggest instead a possible anomalous elaboration in terms of sunspot activity affecting human psychology and emotion which in turn affect both GCP network variance and world events. Even this seems to me to be quite difficult to make sense of given the complication I mentioned above: this seems to entail that sunspot activity affects the human scheduling of predetermined events, which even for parapsychology is pretty strange.

In any case, perhaps you can explain how a non-anomalous hypothesis based in sunspot activity as a cause fulfills the three explanatory requirements, because, as for electromagnetism, I cannot see that or how it does.

(2018-10-20, 08:20 PM)fls Wrote: [ -> ]The most plausible [candidate for a real effect], unfortunately, is the ease with which the selection process can be post hoc.

(Editing note added by me).

Given that the experimenters have been pretty clear that they have not engaged in data mining, I think that this would amount to deliberate fraud. I personally can't rule that out, especially given that all I know of the experiment is what I've read about it online, but I'm assuming honesty for the sake of argument, and because at this point I have no positive evidence of fraud nor compelling reason to expect it. I think if you have any positive evidence you should share it, because what little support you have provided so far is weak:

(2018-10-20, 08:20 PM)fls Wrote: [ -> ]the data is collected before the hypotheses are registered

It seems likely that for at least a subset of the many formally-registered events which are scheduled or otherwise known in advance, such as New Year's Eve, an hypothesis is registered prior to each event's occurring. I have as-yet been unable to confirm that this is the case though, nor how the significance of the results for those events compares to that for the rest of the events. I would be interested if anybody can point me to a source of information on this.

(2018-10-20, 08:20 PM)fls Wrote: [ -> ]there is an ongoing graphical representation of the data which conveniently shows you where the deviations are, it is far too easy to select your events of interest post hoc (https://www.heartmath.org/gci/gcms/live-...s-project/).

However, given the earliest possible date at which data became available via that web page, at most only seven of the 513 formally-registered events could have been monitored live.[11] If the live data was available elsewhere prior to that, then perhaps you can point out where.

In any case, given that events have been excluded from the experiment due to inadvertent data peeking[12], data peeking seems an unlikely explanation.

(2018-10-20, 08:20 PM)fls Wrote: [ -> ]There are a handful of "positive" tests, where the variance was significantly correlated - 42 out of 513 on my rough count. But you'd expect 51 "positive" tests due to chance anyways, so I'm not sure what we're supposed to make of that.

51? Isn't that double what we'd actually expect? Wouldn't we expect 0.05 x 513 events, not 0.1 x 513 events?

And, unless I've miscalculated, the one-tailed p-value by the exact binomial test for 42 successes out of 513 trials at a probability of success of 0.05 is 0.001409, which seems not so incongruous.

We can also try another grouping: by my count, 314 out of the 513 events have positive Z-scores whereas chance expectation is only 256.5 such events. And, unless I've miscalculated, the one-tailed p-value for this by the exact binomial test is 2.169 x 10^-7. Again, not so incongruous.
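Both binomial figures above can be reproduced with an exact tail sum using only the standard library (the helper name binom_tail is mine):

```python
from math import comb

def binom_tail(k, n, p):
    """Exact one-tailed binomial P(X >= k) for n trials with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 42 or more individually "significant" events out of 513 at a 5% cut-off
print(binom_tail(42, 513, 0.05))  # about 0.0014

# 314 or more positive Z-scores out of 513 at 50-50 odds
print(binom_tail(314, 513, 0.5))  # about 2.2e-7
```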

(2018-10-19, 03:17 PM)fls Wrote: [ -> ]I suspect that [the fact that Steve001 and the other skeptics are still here is] more for appearances [sic] sake than an interest in what they have to say, though.

(Editing notes added by me).

I am potentially interested in what skeptics have to say on individual experiments, because, given their general disbelief in anomalous phenomena, they have a motivation to uncover flaws. Informed and careful criticism can be fascinating. Uninformed or specious criticism is... less interesting.

Also less interesting in general when black swans surround us are repetitive claims that all swans are white. And even less so when manipulation and spin are employed. That said, it can sometimes be interesting (or tempting) to debate/discuss these sort of claims with skeptics anyway, partly to clarify issues, partly because in the clash of ideas sometimes - though rarely - genuinely interesting or novel perspectives or possibilities are raised, and partly because on a public forum it can give onlookers some useful food for thought, especially as to why proponents propone despite the skepticking of skeptics.

The idea that the continued presence of skeptics on this forum is for appearance's sake could suggest something about the appearance of the forum's skeptics... but, of course, that's not what you meant. Regardless, I think it is more the case that the continued presence of skeptics on this forum reflects hopefulness and tolerance - as well as a recognition that socially you (plural) contribute your own personalities, humour, gifts, experiences, etc to the community.

[1] See Table 2 on page 15 and the discussion surrounding it in the 2008 paper by Peter Bancel and Roger Nelson The GCP Event Experiment: Design, Analytical Methods, Results.
[2] I do not understand why you would share links which weaken rather than strengthen your claim, but somehow that is what you have done.
[3] Systemic variations from chance in individual RNGs are avoided by XOR processing, which eliminates biases potentially caused by the factors you listed (temperature change and electromagnetic fields), as described in the XOR and REG design pages which you shared.
[4] Erroneous data is actively monitored for and removed from analysis, and valid data are then corrected and normalised, as described in the Data and Known Errors pages which you shared.
[5] Which basically measures the overall correlation between RNG outputs. For details, see pages nine and ten of the paper referenced in footnote 1 above.
[6] A pedantic acknowledgement: strictly speaking, the project's hypothesis does not reference RNG network variance specifically, but that's the measure that ended up being used most in practice.
[7] As described in Appendix 3 on page 23 of the paper referenced in footnote 1 above.
[8] See "S.10 Counterfactual test 1: Undesignated surrogate events" in the Supplementary Materials for Peter Bancel's 2016 paper Searching for Global Consciousness: A Seventeen Year Exploration.
[9] See "S.11 Counterfactual test 2: Alternate test statistics" in the materials referenced in the previous footnote.
[10] See The Global Consciousness Project: a Summary.
[11] Whilst that URL itself is not archived on the Wayback Machine, it appears that the page was previously located at a slightly different URL, https://www.heartmath.org/research/globa...live-data/, which now redirects to the current URL. The earliest two archival entries for that previous URL show that live data became available (inclusively) sometime between 27 July 2015 and 5 March 2016, and only events 507 through 513 occurred after 27 July 2015.
[12] See "Rejected events" under "S.5 Test statistics" in the materials referenced in footnote 8 above.
(2018-10-24, 11:22 PM)Max_B Wrote: [ -> ]IIRC (it’s been a long time since I discussed this with Chris) the GCP uses RNG’s that are environmentally coupled. Voltage and Temperature being two well known causes of poor random performance on RNG devices similar to the GCP’s devices.

You seem to have forgotten the part of your discussion involving the implications of XOR processing, which eliminates systemic bias - including from environmental couplings like voltage and temperature - from the output, as explained, amongst other places, in the links shared by Linda (which I reproduced in my footnote 3).
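A toy sketch of why XORing with a balanced deterministic mask cancels a constant first-order bias (the 55% bias and alternating mask are illustrative assumptions of mine; this says nothing about time-varying bias or autocorrelation):

```python
import random

random.seed(1)

N = 100_000
# Hypothetical biased hardware source: ones come up 55% of the time
raw = [1 if random.random() < 0.55 else 0 for _ in range(N)]

# Deterministic balanced mask (alternating 0101...); half the output
# bits are b, half are 1-b, so a constant bias cancels in expectation
mask = [i % 2 for i in range(N)]
xored = [b ^ m for b, m in zip(raw, mask)]

print(sum(raw) / N)    # close to 0.55: the raw stream is biased
print(sum(xored) / N)  # close to 0.50: the constant bias is gone
```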

(2018-10-24, 11:22 PM)Max_B Wrote: [ -> ]I would expect that the GCP are picking up environmental signals in their environmentally coupled RNG’s, so should try to test for this, and remove as much environmental coupling as possible.

The RNG outputs have been found to be consistent with chance expectation during the formally-registered events (see the reference in my footnote 1). This in turn is inconsistent with environmental signals affecting their output. Your expectations weren't met. Good thing you weren't paying!

It is (simplifying a little) only the inter-RNG correlation of outputs that increases during formally-registered events. Nobody in this thread has advanced a viable non-anomalous explanation as to why or how that could be - other than researcher fraud.

(2018-10-24, 11:22 PM)Max_B Wrote: [ -> ]Their method of selecting events is also not well documented

...and yet it clearly works. The selected events have highly significant results, whereas the results for the unselected remainder of the data do not vary significantly from chance expectation.[13]

(2018-10-24, 11:22 PM)Max_B Wrote: [ -> ]I’m very confident with my own conclusion that I can safely ignore the GCP’s results

Well... as much as I prefer a confident man to a confidence man, I just can't see any basis for your conclusion.

[13] Based on resampling, as described by Roger Nelson on page 245 of Evidence for Psi: Thirteen Empirical Research Reports:

"A similarly powerful control background can be produced by resampling the non-event data (98 percent of the database) to generate clones of the formal data series using the same parameters, but randomly offset start times for the events. Repeated resampling (bootstrap sampling with replacement) produces an empirical distribution of expected scores which is statistically indistinguishable from the random simulation. It provides a rigorous confirmation that the GCP database as a whole conforms to expected null behavior, whereas the behavior at the times of events displays a persistent deviation".
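Nelson's resampling procedure can be sketched in miniature as follows. The data, event length, and resample count here are synthetic toy choices of mine, not the GCP's; the point is only the mechanics of building an empirical null from randomly offset start times:

```python
import math
import random

random.seed(7)

# Synthetic "non-event" data: pure noise, one value per tick
data = [random.gauss(0, 1) for _ in range(50_000)]

EVENT_LEN = 100    # toy event duration
N_RESAMPLES = 2000

def event_z(start):
    """Z-score for one event window (sum of normals rescaled to unit variance)."""
    window = data[start:start + EVENT_LEN]
    return sum(window) / math.sqrt(EVENT_LEN)

# Empirical null distribution from randomly offset start times
null = [event_z(random.randrange(len(data) - EVENT_LEN))
        for _ in range(N_RESAMPLES)]

observed = 2.0  # some hypothetical observed event Z
p_empirical = sum(1 for z in null if z >= observed) / N_RESAMPLES
print(p_empirical)  # typically close to the theoretical tail sf(2.0) ~ 0.023
```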
(2018-10-25, 04:15 AM)Max_B Wrote: [ -> ]Unfortunately XORing doesn’t necessarily get rid of bias... such as an environmental signal buried in the data of these noise-driven, environmentally coupled RNG devices. Indeed it’s well known that XORing can increase bias in some circumstances.

Explanation or reference for that very, very dubious claim?

In any case, even if true, it would be irrelevant. As I've pointed out twice now, the data for individual RNGs have been found in practice over the course of the experiment itself to be free from bias. Will you acknowledge this? If not, I think I'm done here.
(2018-10-24, 09:19 PM)Laird Wrote: [ -> ]But the RNGs produce data which does[1] follow a theoretical distribution, which - as the very links you shared demonstrate[2] - is as expected given both their design[3] and the experiment's quality control precautions[4].

A non-anomalous hypothesis then has to explain not just why and how the proposed cause increases RNG network variance[5] during formally-specified events, but also why this cause does not during the same periods affect the output of individual RNGs, which remains at chance expectation.

It also needs to take into account that the project hypothesised, and that its event selection process was predicated on, the existence of a correlation between major world events and RNG network variance[6], and that the project's results are significantly consistent with this correlation.

Given that the bulk of the individual events also do not show statistically significant increased network variance, and there isn't an excessive number of those which do (as you pointed out, I missed counting some of the "significant" events - there were 60, not 42 (when 51 would be expected), at the 10% cut-off the researchers used (inattention on my part)), alternate explanations don't have a particularly steep hill to climb.

The claim that there is a correlation between major world events and network variance is untested. The ad hoc, capricious sample selection does not test whether there is a correlation. In order to test for a correlation, you would either have to find a way to identify all major world events for a full test, or find a way to draw a representative sample. As Bancel showed, the event sample which has been selected so far is not representative of the entire pool of events, nor is it even remotely complete.

Quote:Since we are exploring non-anomalous explanations, we cannot dismiss this successful selection process and the resulting correlation by substituting, as does Peter Bancel, an alternative explanation of successful event selection based on a goal-oriented model, an experimenter effect, or decision augmentation theory - and to dismiss it as simply spurious and random would be to essentially take the position that you take here:


In this case, we would have to explain how that capricious, ad hoc selection process managed to select a slightly biased sample whose overall level of statistical significance resampling demonstrates[7] could be achieved only about one time in 100,000 capricious, ad hoc attempts.

The resampling you referred to was not 100,000 capricious, ad hoc attempts. It was random sampling. I don't think anyone disagrees that the event samples are improbable under random sampling. The question is whether they would also be improbable if samples were drawn for other goals.

Quote:Too, that was back in 2008, when the set of events numbered only 212. With the present database of circa 500 events, I would expect that the number of capricious, ad hoc samples required to achieve the present level of statistical significance is much, much higher. Are you aware of a plausible non-anomalous explanation for this?

Without being able to dismiss that correlation, and given that it appears to be causal in nature due to its high level of statistical significance, we have a causal scenario in which not only does the hypothesised non-anomalous cause affect RNG network variance, but (the occurrence of) major world events somehow either (directly or via one or more intermediaries) affects or is affected by the hypothesised non-anomalous cause. This clearly complicates the requirements of the explanation.

Too, if the explanation is that the hypothesised non-anomalous cause affects (the occurrence of) major world events, then, considering that many of the formally-specified events had their times fixed in advance (e.g., New Year's Eve), this would seem to entail that the hypothesised non-anomalous cause must affect the human scheduling of those events, which seems... pretty anomalous. Arguably, then, the causal relationship would have to go in the other direction: that major world events affect the hypothesised non-anomalous cause.

Other causality-related complications include Peter Bancel's findings, amongst others, that the results for a surrogate set of unregistered major world events were non-significant even though the results for the comparable set of formally-registered major world events were very significant[8], and that the results using alternative methods of analysis for the same set of formally-registered events were non-significant whereas the results using the formally-registered methods of analysis of the events were highly significant, even though all those methods of analysis were at some point used in formally-registered events[9] - both of which on a non-anomalous hypothesis are hard to explain, but which for simplicity I have excluded from the below summary.

To sum up, a non-anomalous hypothesis has, as far as I can see, at least three (numbered in parentheses) explanatory requirements:

(1) Why and how the proposed non-anomalous cause increases RNG network variance during formally-specified events (2) without simultaneously causing the output of individual RNGs to deviate from chance expectation, and (3) why and how major world events affect or are affected by the hypothesised non-anomalous cause.

As far as I can see, none of these requirements is met by any of the examples you've provided: "geomagnetic fields, electro-communications traffic, time of day, location of galactic center, pirate activity".

For a start, we seem to be able to discard the final three of them, at least as ultimate causes: time can't - as far as I can tell - itself be a cause, only a proxy marker for a cause; for pirate activity to be an ultimate cause would clearly be anomalous; and the location of the galactic center as an ultimate cause would also appear to be anomalous, for similar reasons as those for which astrology is considered to be anomalous.

That leaves us with "geomagnetic fields" and "electro-communications traffic", which we can abstract as "electromagnetism" in general. Perhaps you can explain how electromagnetism (in general or in either of these specific cases) fulfills the three explanatory requirements, because I cannot see that or how it does. The researchers themselves say that they "have excluded reasonable mundane explanations such as electromagnetic radiation, excessive strain on the power grid, or mobile phone use".[10]

The other candidate cause which you implied by sharing a link is sunspot activity. It does, as the page to which you linked suggests, appear to be correlated with the cumulative deviation of RNG network variance. This is very intriguing, but it is far from a non-anomalous explanation in and of itself, and the researchers themselves do not seem to think that it can be developed into one. They suggest instead a possible anomalous elaboration in terms of sunspot activity affecting human psychology and emotion which in turn affect both GCP network variance and world events. Even this seems to me to be quite difficult to make sense of given the complication I mentioned above: this seems to entail that sunspot activity affects the human scheduling of predetermined events, which even for parapsychology is pretty strange.

In any case, perhaps you can explain how a non-anomalous hypothesis based in sunspot activity as a cause fulfills the three explanatory requirements, because, as for electromagnetism, I cannot see that or how it does.

The examples I picked had all been tested or mentioned as factors which were known to affect the RNG output (except for pirate activity which was a tongue-in-cheek reference to the Global Warming/Pirate activity correlation). I'm not sure why you are asking us to assume those factors are anomalous unless proven otherwise (which also applies to the major world events factor). 

Quote:(Editing note added by me).

Given that the experimenters have been pretty clear that they have not engaged in data mining, I think that this would amount to deliberate fraud. I personally can't rule that out, especially given that all I know of the experiment is what I've read about it online, but I'm assuming honesty for the sake of argument, and because at this point I have no positive evidence of fraud nor compelling reason to expect it. I think if you have any positive evidence you should share it, because what little support you have provided so far is weak:

It's nice that you choose to be so accommodating. Unfortunately, this isn't considered fraud, but rather common practice which has only recently begun to be discouraged among psychologists.

Quote:It seems likely that for at least a subset of the many formally-registered events which are scheduled or otherwise known in advance, such as New Year's Eve, an hypothesis is registered prior to each event's occurring. I have as-yet been unable to confirm that this is the case though, nor how the significance of the results for those events compares to that for the rest of the events. I would be interested if anybody can point me to a source of information on this.

The cumulative New Year's Eve data is shown here:
http://noosphere.princeton.edu/events/newyear.2015.html

Bancel compared registered and unregistered Earth and Peace days. 

Quote:However, given the earliest possible date at which data became available via that web page, at most only seven of the 513 formally-registered events could have been monitored live.[11] If the live data was available elsewhere prior to that, then perhaps you can point out where.

In any case, given that events have been excluded from the experiment due to inadvertent data peeking[12], data peeking seems an unlikely explanation.

Sorry, but I don't think it does parapsychology any good to ask other researchers to trust that they have been inordinately incurious. And offering a confession to discourage skepticism is also a well-known tactic.

Linda
(2018-10-26, 04:05 AM)fls Wrote: [ -> ](as you pointed out, I missed counting some of the "significant" events - there were 60, not 42 (when 51 would be expected), at the 10% cut-off the researchers used (inattention on my part))

I have no idea what you are talking about. There are so many errors in this statement that it is hard to know where to begin... but I will find a place.

For a start, I did not point out that you missed counting anything - if I had, I would have been wrong, because you did not. I verified your count for myself. Your count of 42 events was correct given a cut-off p-value of 0.05. The problem that I did point out is that you mistakenly assumed a p-value of double that - 0.1 - when calculating the expected number of events. The cut-off the researchers used was 5%, not 10% as you now assert. But if it had been 10%, then the event count would have been not 60, as you now assert, but 100 (for <10%) or 101 (for <=10% - there is one event, #268, with a listed p-value of exactly 0.1).

And, by the exact binomial test, the one-tailed p-value for 100 successes out of 513 with a probability of success of 0.1 is 8.17 × 10^-11. Again, consistent with the overall significance of the results.
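That figure is straightforward to check. A minimal Python sketch (standard library only, not anyone's original analysis code) computing the exact one-tailed binomial probability of at least 100 hits in 513 trials at a 10% chance rate:

```python
import math

def binom_sf(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p), summed term by term from
    the pmf. Terms far out in the tail underflow to 0.0, which is
    harmless at this scale."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

n, k, p0 = 513, 100, 0.10     # events, events with p < 0.1, chance rate
print(n * p0)                 # ~51.3 events expected by chance
print(binom_sf(k, n, p0))     # on the order of 8e-11, consistent with
                              # the 8.17 x 10^-11 quoted above
```

The same function with k = 42 and p0 = 0.05 gives the corresponding tail probability for the 5% cut-off mentioned above.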

The rest of your response is similarly sloppy and ill-considered.

I would honestly have been interested in a careful, thoughtful response to the explanatory requirements that I laid out for a non-anomalous hypothesis. It is possible that something has been missed and that there is a non-anomalous explanation, and maybe you are capable of providing one. If you were under time pressure or for whatever reason unable to devote enough brainpower to it then I am happy for you to take your time and give it another, proper attempt. But, frankly, the response you've provided is an insult.
(2018-10-26, 05:28 AM)Laird Wrote: [ -> ]I have no idea what you are talking about.

I'm sorry. I thought it was obvious.

The results highlighted in bold red and green, which are called "significant", are those with a z-score of +1.64 or greater (or -1.64 or lesser for the negative values). The interval from -1.64 to +1.64 covers 90% of the data in a theoretical distribution, so the scores outside it represent 10%.

Quote:The rest of your response is similarly sloppy and ill-considered.

I would honestly have been interested in a careful, thoughtful response to the explanatory requirements that I laid out for a non-anomalous hypothesis. It is possible that something has been missed and that there is a non-anomalous explanation, and maybe you are capable of providing one. If you were under time pressure or for whatever reason unable to devote enough brainpower to it then I am happy for you to take your time and give it another, proper attempt. But, frankly, the response you've provided is an insult.

I did put care and thought into my response. The problem is that whether or not the issues raised by me and other researchers/interested parties are relevant has not been tested in the experiments, so it is not possible to tell whether or not the results are "anomalous". The researchers themselves have not attempted to establish that the results are anomalous, only that they haven't found a clear explanation (without actually designing experiments which would be a good test of these issues).

There are serious problems even with the idea that deviations are associated with major world events, even if we ignore that the presumption that this is due to a "global consciousness" remains completely unestablished. Instead, it has been shown (albeit to a small degree) that finding "statistically significant" deviations depends upon a fortuitous selection from a pool of equivalent events, rather than considering the pool as a whole or a representative sample from that pool (the Peace and Earth day analysis by Bancel). And in the one instance where the researchers have tied their hands, so that the entire pool must be considered, the results are unremarkable (the New Year's day results). Whether or not these concerns apply to the whole database is unknown, but it does seem to put these findings in the hands of the researchers, rather than in something lurking in the data.
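The selection concern described here can be illustrated with a toy simulation (a sketch only, not the actual GCP analysis: the candidate count of 8 and the null model are assumptions chosen for illustration). If a null z-score is drawn for each of several equivalent candidate event definitions and only the most extreme is reported, the nominal 5% "significance" threshold is crossed far more often than 5% of the time:

```python
import random

random.seed(1)

N_CANDIDATES = 8     # assumed number of equivalent event definitions
TRIALS = 10_000

def best_of_candidates():
    """Null z-scores, one per candidate definition; report only the max,
    mimicking post-hoc selection of the most favourable analysis."""
    return max(random.gauss(0.0, 1.0) for _ in range(N_CANDIDATES))

hit_rate = sum(best_of_candidates() >= 1.645 for _ in range(TRIALS)) / TRIALS
print(hit_rate)      # roughly 1 - 0.95**8 = 0.34, not the nominal 0.05
```

With 8 candidates the expected false-alarm rate is 1 - 0.95^8 ≈ 34%, which is why pre-registration of a single event definition matters.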

It seems unreasonable to ask me to explain the findings (when we both know that you are going to reject any plausible explanations out of hand anyways) within a day or two, when the researchers have had decades to make this attempt and have not bothered doing so.

Linda
(2018-10-26, 12:12 PM)fls Wrote: [ -> ]I'm sorry. I thought it was obvious.

The results highlighted in bold red and green, which are called "significant", are those with a z-score of +1.64 or greater (or -1.64 or lesser for the negative values). The interval from -1.64 to +1.64 covers 90% of the data in a theoretical distribution, so the scores outside it represent 10%.

Linda, the experimental hypothesis predicts a positive deviation. The 10% should obviously then come from the right tail of the distribution, not from both tails. This is even more obvious given that the p-values listed for the "significant" negatively-deviating event scores are larger than 0.95, not smaller than 0.05.
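The arithmetic behind that threshold can be checked from the standard normal CDF (a stdlib sketch; z = 1.645 is the conventional 5% one-tailed critical value, quoted as 1.64 in the thread). Marking both tails at that value flags 10% of a null distribution, while the right tail alone - the one the directional hypothesis predicts - is 5%:

```python
import math

def phi(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = 1.645
print(1 - phi(z))        # right tail alone: ~0.05
print(2 * (1 - phi(z)))  # both tails together: ~0.10
```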

As I pointed out, there are 100 events within the top 10% of the right tail, with a one-tailed p-value of 8.17 × 10^-11, consistent with the overall results.

The rest of your response, again, is similarly spun, and either ignores substantive issues or misrepresents them. I do not see any value in addressing it.
(2018-10-26, 05:30 PM)Laird Wrote: [ -> ]Linda, the experimental hypothesis predicts a positive deviation.

Not really. Even the researchers point out that they had to cast around a bit to figure out how this may show up in the data, including the direction of the effect. And this recent discussion started because malf suggested Radin and Nelson as a source of electromagnetic theories of consciousness, given that they must have had some theory that they were testing with their RNGs. This possibility was rejected by proponents, though, and a theoretical basis for this hasn't been offered anyways. It's pretty clear that if their choice amongst various events and various outcomes had led to a negative deviation, that would have become the predicted direction.

And this doesn't matter anyways, since merely predicting a positive deviation is insufficient reason to justify one-tailed testing. One-tailed testing is valid under specific circumstances, and this isn't one of them.
 
Quote:The 10% should obviously then come from the right tail of the distribution, not from both tails.

Don't tell me this. Tell Nelson or whomever it was who marked both tails of the distribution as significant.
 
Quote:The rest of your response, again, is similarly spun, and either ignores substantive issues or misrepresents them. I do not see any value in addressing it.

Right. Like I said, we are here for appearances' sake, not because you are willing to engage with what we have to say. So it is to be expected that any and all thoughtful and careful responses on my part will be met only with insults on your part. That I went along with what Nelson marked as "significant", and you chose to characterize that as "spin", is simply yet another egregious example.

I am willing to be pleasantly shocked and surprised on this point, but I'm certainly not holding my breath.

Linda