The Global Consciousness Project


(2018-10-26, 05:28 AM)Laird Wrote: I have no idea what you are talking about.

I'm sorry. I thought it was obvious.

The results which are highlighted in bold red and green, which are called "significant", are those with a z-score of +1.64 or greater (or -1.64 or less for the negative values). The range between -1.64 and +1.64 covers about 90% of a theoretical normal distribution, so results outside of ±1.64 represent about 10%.
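
(A quick way to check those figures, for anyone who wants to, using scipy - nothing here beyond the standard normal distribution and the 1.64 cutoff quoted above:)

Code:
from scipy.stats import norm

z = 1.64
print(norm.cdf(z) - norm.cdf(-z))  # mass between -1.64 and +1.64: about 0.90
print(1 - norm.cdf(z))             # one-tailed mass above +1.64: about 0.05
print(norm.cdf(-z))                # one-tailed mass below -1.64: about 0.05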

Quote:The rest of your response is similarly sloppy and ill-considered.

I would honestly have been interested in a careful, thoughtful response to the explanatory requirements that I laid out for a non-anomalous hypothesis. It is possible that something has been missed and that there is a non-anomalous explanation, and maybe you are capable of providing one. If you were under time pressure or for whatever reason unable to devote enough brainpower to it then I am happy for you to take your time and give it another, proper attempt. But, frankly, the response you've provided is an insult.

I did put care and thought into my response. The problem is that whether or not the issues raised by me and other researchers/interested parties are relevant has not been tested in the experiments, so it is not possible to tell whether or not the results are "anomalous". The researchers themselves have not attempted to establish that the results are anomalous, only that they haven't found a clear explanation (without actually designing experiments which would be a good test of these issues).

There are serious problems even with the idea that deviations are associated with major world events, even setting aside that the presumption that this is due to a "global consciousness" remains completely unestablished. Instead, it has been shown (albeit to a small degree) that finding "statistically significant" deviations depends upon a fortuitous selection from a pool of equivalent events, rather than considering the pool as a whole or a representative sample from it (the Peace Day and Earth Day analysis by Bancel). And in the one instance where the researchers have tied their hands, so that the entire pool must be considered, the results are unremarkable (the New Year's Day results). Whether or not these concerns apply to the whole database is unknown, but it does seem to place these findings in the hands of the researchers, rather than in something which is lurking in the data.

It seems unreasonable to ask me to explain the findings (when we both know that you are going to reject any plausible explanations out of hand anyways) within a day or two, when the researchers have had decades to make this attempt and have not bothered doing so.

Linda
(This post was last modified: 2018-10-26, 12:12 PM by fls.)
[-] The following 2 users Like fls's post:
  • Max_B, Steve001
(2018-10-26, 12:12 PM)fls Wrote: I'm sorry. I thought it was obvious.

The results which are highlighted in bold red and green, which are called "significant", are those with a z-score of +1.64 or greater (or -1.64 or less for the negative values). The range between -1.64 and +1.64 covers about 90% of a theoretical normal distribution, so results outside of ±1.64 represent about 10%.

Linda, the experimental hypothesis predicts a positive deviation. The 10% should obviously then come from the right tail of the distribution, not from both tails. This is even more obvious given that the p-values listed for the "significant" negatively-deviating event scores are larger than 0.95, not smaller than 0.05.

As I pointed out, there are 100 events within the top 10% of the right tail, with a one-tailed p-value of 8.17 x 10^-11, consistent with the overall results.
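
(For what it's worth, a figure of that kind can be reproduced with an exact binomial tail calculation in scipy - this is only a sketch of the calculation, using the counts quoted above; the precise value depends on the exact counts and cutoffs used:)

Code:
from scipy.stats import binom

n, k, p0 = 513, 100, 0.10
print(binom.sf(k - 1, n, p0))  # P(X >= 100) under Binomial(513, 0.1): one-tailed p-value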

The rest of your response, again, is similarly spun, and either ignores substantive issues or misrepresents them. I do not see any value in addressing it.
[-] The following 2 users Like Laird's post:
  • Doug, tim
(2018-10-26, 05:30 PM)Laird Wrote: Linda, the experimental hypothesis predicts a positive deviation.

Not really. Even the researchers point out that they had to cast around a bit to figure out how this might show up in the data, including the direction of the effect. And this recent discussion started because malf suggested Radin and Nelson as a source of electromagnetic theories of consciousness, given that they must have had some theory that they were testing with their RNGs. This possibility was rejected by proponents, though, and a theoretical basis for this hasn't been offered anyways. It's pretty clear that if their choice amongst various events and various outcomes had led to a negative deviation, that would have become the predicted direction.

And this doesn't matter anyways, since merely predicting a positive deviation is insufficient reason to justify one-tailed testing. One-tailed testing is valid under specific circumstances, and this isn't one of them.
 
Quote:The 10% should obviously then come from the right tail of the distribution, not from both tails.

Don't tell me this. Tell Nelson or whoever it was who marked both tails of the distribution as significant.
 
Quote:The rest of your response, again, is similarly spun, and either ignores substantive issues or misrepresents them. I do not see any value in addressing it.

Right. Like I said, we are here for appearances' sake, not because you are willing to engage with what we have to say. So it is to be expected that any and all thoughtful and careful responses on my part will be met only with insults on your part. That I went along with what Nelson marked as "significant", and that you chose to characterize this as "spin", is simply yet another egregious example.

I am willing to be pleasantly shocked and surprised on this point, but I'm certainly not holding my breath.

Linda
(This post was last modified: 2018-10-26, 11:00 PM by fls.)
Quote:Laird: Linda, the experimental hypothesis predicts a positive deviation.

Linda: Not really.

Yes, really. It is right there in black and white in Table 1 in the very results page that we have been referencing, under the heading "Hypothesis": "Positive Deviation".

(2018-10-26, 10:55 PM)fls Wrote: And this doesn't matter anyways, since merely predicting a positive deviation is insufficient reason to justify one-tailed testing.

The point is that the overall results were as predicted - that is, with a positive deviation. The question then is what counts would we expect given the results we got? After all, we are, presumably, trying to work out whether these counts are consistent with the overall results. And, of course, given the results, which tend (significantly) to deviate positively, we would expect fewer than 5% of events to be in the nominally lowest 5% of the distribution, whereas we would expect more than 5% of events to be in the nominally highest 5% of the distribution. So, we can't just mush these two 5% segments at opposite ends of the population together, can we? We have to consider them separately: one to test for "less than" and the other to test for "greater than".

Given the results, then, we would expect for the nominally top 5% of the distribution to find substantially more than 5% of the events (0.05 x 513 = 25.65 events) - and we do: we find 42.

And, given the results, we would expect for the nominally bottom 5% of the distribution to find substantially fewer than 5% of the events (again, 25.65 events) - and, again, we do: we find only 18.

Or, again, as in my last post, we could consider the nominally top 10% of the distribution, expecting to find there substantially more than 10% of the events (0.1 x 513 = 51.3 events) - and, as I pointed out, we again do: we find 100 (or 101 if you count the event with a p-value of exactly 0.1).

So, in the end, I simply do not see anything inconsistent in these counts.

(I just want to add that I am not statistically savvy enough to be sure that the exact binomial test which I'd used in previous posts to calculate p-values for these counts is the most appropriate one - intuitively though it seems to be at least a reasonable approximation. Maybe somebody more stats-minded can clue me in).
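
(For concreteness, this is roughly what that exact binomial test looks like in scipy for the two 5% counts above - a sketch only; whether it is the right test for these data is exactly the open question:)

Code:
from scipy.stats import binomtest

n, p0 = 513, 0.05
top = binomtest(42, n, p0, alternative='greater')  # more events than expected in the nominal top 5%?
bottom = binomtest(18, n, p0, alternative='less')  # fewer events than expected in the nominal bottom 5%?
print(top.pvalue, bottom.pvalue)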

(2018-10-26, 10:55 PM)fls Wrote: [Y]ou are [not] willing to engage with what we have to say. So it is to be expected that any and all thoughtful and careful responses on my part will be met only with insults on your part.

(My editing notes).

I see the opposite. In my main response to you, I carefully and methodically laid out the explanatory challenges that a non-anomalous hypothesis faces. You did not engage with this - the most substantive part of what I laid out - in the slightest. Pretty much every part of your response ignored pretty much every point I made - either that or it made no sense. To me, that is the insult. Pointing out that that is what you have done is not an insult. It is an appropriate statement of fact.

All of that said: I understand that it is not pleasant for you to read all of this. It is not my intent to do you harm, just to get at the truth. And I do think that you are intelligent, which is why it was especially disappointing to read your response to my main post, which was a labour of love. I hope that you do go back to it and give it a little more consideration... and let me know if I can clarify any of the requirements and why I do not think they have yet been met.
(This post was last modified: 2018-10-27, 03:25 AM by Laird.)
[-] The following 2 users Like Laird's post:
  • tim, Doug
(2018-10-27, 03:04 AM)Laird Wrote: Yes, really. It is right there in black and white in Table 1 in the very results page that we have been referencing, under the heading "Hypothesis": "Positive Deviation".

[...]

So, in the end, I simply do not see anything inconsistent in these counts.

These tests and claims are only appropriate if the only results which Nelson et al. would have taken as a confirmation of their speculation are the results which they found. A little thought on this will show that this is not the case. That is, if a different set of outcomes (e.g. if the mean deviations in one group of data had been variance deviations instead and vice versa, or if deviations in the third or fourth moments had been used instead, or... (use your imagination)) had given an overall significant result, or if a different capricious set of major world events, or different amounts of resolution, or deviations in the opposite direction, had given an overall significant result, then this would have been taken as a confirmation of Nelson et al.'s speculation as well. This means that your consideration of which tails of the normal distribution are relevant, and to what degree, has to take into account the wide variety of outcomes which would also have been claimed as confirmation, had those results been found instead. Plus, the binomial calculations need to account for a much greater number of attempts than those specific to what was found.
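
(To make that point concrete, here is a toy simulation - purely illustrative, with made-up parameters and only two candidate statistics - of how picking whichever outcome looks best on pure noise inflates the nominal false-positive rate:)

Code:
import numpy as np

rng = np.random.default_rng(0)
n_events, n_samples = 1000, 200
hits = 0
for _ in range(n_events):
    x = rng.standard_normal(n_samples)                     # pure noise, no real effect
    z_mean = x.mean() * np.sqrt(n_samples)                 # candidate outcome 1: mean deviation
    z_var = (x.var(ddof=1) - 1) * np.sqrt(n_samples / 2)   # candidate outcome 2: variance deviation (approx.)
    if max(abs(z_mean), abs(z_var)) > 1.96:                # keep whichever looks best, in either direction
        hits += 1
print(hits / n_events)  # noticeably above the nominal 0.05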

Quote:(My editing notes).

I see the opposite. In my main response to you, I carefully and methodically laid out the explanatory challenges that a non-anomalous hypothesis faces. You did not engage with this - the most substantive part of what I laid out - in the slightest. Pretty much every part of your response ignored pretty much every point I made - either that or it made no sense. To me, that is the insult. Pointing out that that is what you have done is not an insult. It is an appropriate statement of fact.

You are correct. I did not address each of your points in detail (I did read it carefully), because, as I pointed out, it is unreasonable to suggest that non-anomalous explanations need to address these points given that Nelson et al.'s supposed anomalous explanation does not address them either. Nor have you or anyone else given us a way to distinguish between what would be "anomalous" and what would be "non-anomalous" to begin with.

What was an insult was that you characterized my part in this discussion as "spin", "error", "sloppy" and "ill-considered", just because you, by your own admission, were not statistically savvy enough, and because you ignored (twice) my explanation of why your requirements for non-anomalous explanations had not been met by the supposed anomalous explanation from Nelson et al. You will notice that your failure to address my points (and your selective, out-of-context quoting, which allows you to ignore the conditions I mentioned and respond as though I said something else) was not met with a series of insults from me.

Quote:All of that said: I understand that it is not pleasant for you to read all of this. It is not my intent to do you harm, just to get at the truth. And I do think that you are intelligent, which is why it was especially disappointing to read your response to my main post, which was a labour of love. I hope that you do go back to it and give it a little more consideration... and let me know if I can clarify any of the requirements and why I do not think they have yet been met.

My position is that no explanations - anomalous or otherwise - have met those requirements. If one wants to demonstrate that "global consciousness" does meet them, going forward, then the following major issues need to be addressed. A valid way of identifying major world events needs to be developed. A valid measure of "global consciousness" needs to be developed which will allow a comparison between major world events which involve "global consciousness" and those which do not. Data security needs to be in place such that it would be near impossible for the data undergoing analysis to have been seen by anyone involved in decision-making. A specific and valid outcome measure needs to be determined. If the effect holds up under those conditions, specific experiments could be developed to elucidate the mechanism. Someone who cares (I don't) would need to come up with a (valid) way to label that mechanism anomalous or non-anomalous.

Linda
(This post was last modified: 2018-10-27, 06:35 PM by fls.)
(2018-10-27, 06:33 PM)fls Wrote: These tests and claims are only appropriate if the only results which Nelson et al. would have taken as a confirmation of their speculation are the results which they found.

I don't think that the test is appropriate in any case, for the reason I explained: that (whether the average effect size is positive or negative) the increase in the number of events at one tail will be offset by the decrease in the number of events at the other. Given this offsetting, it turns out that the test in this case (the binomial probability of getting at least 60 "successes" out of 513 at a probability of "success" of 10%) doesn't reach significance at all (even at a 0.1 level).

Incidentally, it has been pointed out to me that on the assumption that the average effect size of 0.327 represents a shift to the right, by 0.327 standard deviations, of the mean of the normal distribution of the results, we can quantify the expected count of events in each of the top 5% and bottom 5% of the distribution (whose shifts, given the effect size, I had in my previous post referred to merely as "substantial"). We can then sum the two to find the expected combined figure (compared to the actual figure of 60). The answer is... interesting. See what you come up with.
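
(A sketch of that calculation, under the stated assumption that the 0.327 average effect size is a rightward shift of the mean of a normal distribution of event scores, with the nominal 5% cutoffs taken from that distribution:)

Code:
from scipy.stats import norm

n_events, shift = 513, 0.327
z_cut = norm.ppf(0.95)                              # nominal 5% cutoff, about 1.645
top5 = n_events * (1 - norm.cdf(z_cut - shift))     # expected events in the nominal top 5%
bottom5 = n_events * norm.cdf(-z_cut - shift)       # expected events in the nominal bottom 5%
print(top5, bottom5, top5 + bottom5)                # compare the sum with the observed 60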

It has also been pointed out to me that we can explore the extent to which the partial cancellation that I noted leads to a low power for the test (again, on the assumption of the previous paragraph). Even at an alpha of 0.1, the power is... not very impressive. See what you come up with.
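
(And a corresponding power sketch, under the same shifted-normal assumption: find the smallest count the one-sided binomial test would call significant at alpha = 0.1, then ask how often the test would reach it if the true proportion of events in the combined extreme 10% were the one implied by the 0.327 shift:)

Code:
from scipy.stats import norm, binom

n, alpha, p0, shift = 513, 0.10, 0.10, 0.327
z_cut = norm.ppf(0.95)
p_true = (1 - norm.cdf(z_cut - shift)) + norm.cdf(-z_cut - shift)  # implied proportion in the extreme 10%
k = int(binom.ppf(1 - alpha, n, p0))   # starting guess for the critical count
while binom.sf(k - 1, n, p0) > alpha:  # smallest k with P(X >= k | null) <= alpha
    k += 1
power = binom.sf(k - 1, n, p_true)     # chance of reaching that count if the shift is real
print(k, power)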

In any case, none of this takes anything away from the overall significance (over seven sigma) of the results, so it's something of a(n admittedly educational) diversion.

(2018-10-27, 06:33 PM)fls Wrote: A little thought on this will show that this is not the case. That is, if a different set of outcomes (e.g. if the mean deviations in one group of data had been variance deviations instead and vice versa, or if deviations in the third or fourth moments had been used instead, or... (use your imagination)) had given an overall significant result, or if a different capricious set of major world events, or different amounts of resolution, or deviations in the opposite direction, had given an overall significant result, then this would have been taken as a confirmation of Nelson et al.'s speculation as well.

Well, given that both the set of events (as we already know) and the hypotheses and analytical recipes for them were specified before looking at the data, changing them after the fact would amount to deliberate fraud.

(2018-10-27, 06:33 PM)fls Wrote: My position is that no explanations - anomalous or otherwise - have met those requirements.

So, you can't suggest a single possible non-anomalous hypothesis. OK. That's what I was trying to find out through this back-and-forth.

Anomalous explanations are another question. I haven't read all of the relevant papers, so I won't try to answer that question right now.
[-] The following 1 user Likes Laird's post:
  • Doug
(2018-10-30, 11:13 AM)Laird Wrote: I don't think that the test is appropriate in any case, for the reason I explained: that (whether the average effect size is positive or negative) the increase in the number of events at one tail will be offset by the decrease in the number of events at the other. Given this offsetting, it turns out that the test in this case (the binomial probability of getting at least 60 "successes" out of 513 at a probability of "success" of 10%) doesn't reach significance at all (even at a 0.1 level).

Incidentally, it has been pointed out to me that on the assumption that the average effect size of 0.327 represents a shift to the right, by 0.327 standard deviations, of the mean of the normal distribution of the results, we can quantify the expected count of events in each of the top 5% and bottom 5% of the distribution (whose shifts, given the effect size, I had in my previous post referred to merely as "substantial"). We can then sum the two to find the expected combined figure (compared to the actual figure of 60). The answer is... interesting. See what you come up with.

Why would I do that? That particular calculation is not valid with respect to understanding whether or not these results are remarkable. Again, it makes the mistake of applying a priori statistics to post hoc findings. 

Quote:It has also been pointed out to me that we can explore the extent to which the partial cancellation that I noted leads to a low power for the test (again, on the assumption of the previous paragraph). Even at an alpha of 0.1, the power is... not very impressive. See what you come up with.

In any case, none of this takes anything away from the overall significance (over seven sigma) of the results, so it's something of a(n admittedly educational) diversion.

Well, given that both the set of events (as we already know) and the hypotheses and analytical recipes for them were specified before looking at the data, changing them after the fact would amount to deliberate fraud.

No, it wouldn't. Bem and Radin (as well as others) have been caught changing their hypotheses after the fact, and the parapsychology community (or the psychology community in Bem's case) isn't leveling that charge. I personally think it rises to the level of deceit in some cases, but I have to recognize that unless you put barriers in place to stop it, it's S.O.P. Why do you think scientists in other fields don't take this seriously? Because they are aware that this is what researchers do, even if they are not supposed to. Especially since Bancel's findings are consistent with fortuitous selection.

Think about it...if a pharmaceutical company tried to tell you that they have a new (expensive) perfectly safe wonder drug, proven by taking a highly selected sample from all the people who used the drug in their Phase I trials, would you seriously believe them?

Quote:So, you can't suggest a single possible non-anomalous hypothesis.

What are you talking about? I suggested a number of possible non-anomalous hypotheses. None of them have been ruled out. And Nelson et al.'s hypothesis of "Global Consciousness" hasn't even been ruled in, never mind that nobody has specified exactly what would make the mechanism "anomalous" if or when somebody gets around to figuring out the mechanism (if there is even a mechanism to be found, given that the effect disappears in non-selected samples). Don't mistake the p-value for any sort of measure of the robustness of the effect, or as evidence of "causation". At the moment, it's a measure of a selection bias.

Linda
(This post was last modified: 2018-10-30, 06:16 PM by fls.)
(2018-10-24, 11:22 PM)Max_B Wrote: IIRC (it’s been a long time since I discussed this with Chris) the GCP uses RNG’s that are environmentally coupled. Voltage and Temperature being two well known causes of poor random performance on RNG devices similar to the GCP’s devices. Nowadays for critical projects, people have moved on to using true RNG’s, because the problems with the older types of RNG’s are well known.

I would expect that the GCP are picking up environmental signals in their environmentally coupled RNG’s, so should try to test for this, and remove as much environmental coupling as possible. Their method of selecting events is also not well documented, and needs addressing. There is a lot that could be done to address these weaknesses. Some of these weaknesses have been mentioned before, years ago by other people who have written papers about it.

I’m very confident with my own conclusion that I can safely ignore the GCP’s results until they properly improve their experiment, at which time it’s very likely their results will simply disappear. The same goes for Radins work with RNG’s.

(2018-10-30, 06:15 PM)fls Wrote: [...] Don't mistake the p-value for any sort of measure of the robustness of the effect, or as evidence of "causation". At the moment, it's a measure of a selection bias.

Linda

You know what's funny? Laird equating correlations with causation a few postings back. How many times have we both heard this dismissive statement: brain correlates aren't equal to causation (of consciousness), or something similar. I guess when one wants something to be true it's fair to use it.
[-] The following 1 user Likes Steve001's post:
  • fls
Jeffrey Mishlove has a one-hour interview with Roger Nelson about the Global Consciousness Project in his "New Thinking Allowed" series:

[-] The following 1 user Likes Guest's post:
  • Laird
Thanks for sharing that video, Chris. It seems like a helpful overview. I noticed, though, that, as in the Psi Encyclopedia's GCP entry, there was little to no discussion of competing hypotheses when it comes to the GCP, although decision augmentation theory was briefly mentioned in the discussion about the PEAR lab experiments.

I'm now prompted to get back into this thread and offer a few missing responses. First off, quoting Max's last response to me (emphasis added by me):

(2018-10-26, 06:45 AM)Max_B Wrote: The comments I’ve made are definitely not irrelevant, both the weaknesses of the experiments event selection, and the problems with using environmentally coupled RNG devices in this way. It would be useful if you spent some time studying the literature on RNGs, their different designs, problems (voltage, temperature etc), and limitations (particularly relevant to their use for long range simulations and for encryption). There are some great review papers available on line, and some great videos on encryption too. Everything Chris and I discussed is on that other GCP thread, somewhere on PSIQuest, couple of links in there too.

Max, I think you'd missed that this thread is that same thread - revived - where you and Chris had your discussion. I've reviewed that discussion and I've also done the homework you suggested, by reading in full the papers to which you've linked[14]: paper #1 is PUF-Based Random Number Generation and paper #2 is True Random Number Generators, from which you've quoted several times in this thread already.

On review of the papers and your discussion, I don't find anything that raises any red flags for the GCP.

Your initial argument was that changes in temperature and voltage could cause significant bias in random number generators (RNGs). This, though, for two reasons, does not hold:
  1. Even though the papers raise such environmentally-induced bias as a possibility for more basic components of RNGs, they make it clear that additional components and design features mitigate the seriousness of this bias.
  2. Whatever bias remains is entirely mitigated by XOR processing in the RNGs used in the GCP.
Later, you raised another argument, to which as yet you haven't received a response, based on the claim that an inter-RNG environmental signal is anyway buried in the data despite overall bias being removed by XORing. To succeed, though, this argument would require that pre-XOR biases which are correlated amongst RNGs survived XOR processing, which in turn would require that a common XOR mask be time-synchronised across all RNGs, but this is not the case. I offer a more detailed response below.

First, though, here are some details re your initial argument as to temperature- and voltage-induced bias.

Paper #1 and reason #1

Paper #1 deals with "a candidate hardware random number generator" using "Physical Random Functions" or "Physical Unclonable Functions" (PUFs). In other words, this is not so much a review paper of RNGs as a description of the design and implementation of a single (type of) RNG. Given that its design appears to be different from the RNGs used in the GCP, it has only limited relevance to the GCP. Let's look at it in any case.

In post #26 in the Skeptiko thread (see [14]), you quoted from section 2 of this paper words to the effect that changing the temperature and voltage affected the noise of the "PUF delay circuit" by upward of 9%. You seemed to be implying that temperature and voltage changes could significantly reduce the randomness of this RNG in practice. However, the PUF delay circuit is only one component of this RNG. Additional processes, described in the paper's section 3, are used to manage any noise in the PUF, and section 4 summarises how well it passes statistical tests in practice: the very lowest result across all of the randomness tests, even for the least random outputs, was still 93.5%. The authors conclude that this RNG "has been shown to produce outputs which pass most statistical tests for randomness. This suggests that PUF-based random number generation is a cheap and viable alternative to more expensive and complicated hardware random number generation". They do not raise any concerns about voltage and temperature in practice, and appear to endorse this device for ordinary usage - although, again, it was not used in the GCP. In summary, paper #1 raises no red flags for the GCP.

Paper #2 and reason #1

Paper #2 is a remarkably thorough review. As I understand it, the type of RNGs used in the GCP would be categorised in this paper as "noise-based", and, in post #30 in the Skeptiko thread in which you linked to this paper (again, see [14]), you quoted from the section devoted to them thus: "And then there is a problem of stability: even the smallest drift of the mean value (for example due to temperature or supply voltage change) will create a large bias".

The implication seems to be again that temperature and voltage are a serious problem for the randomness of these RNGs in practice. However, the statement that you quoted refers only to the "general idea" of noise-based RNGs, and, very shortly after it, the authors write that "Going from this basic circuit, researchers have proposed many circuits whose aim is to improve the randomness, notably the bias".

Paper #2 and reason #2

Further on in the same section, the authors write (which you quoted as part of a longer quote in post #72 to this thread): "For all noise-based generators, some kind of post-processing is required. In some cases a simple ad hoc post-processing such as XORing several subsequent bits or von Neumann [94] de-biasing may be good enough".

As Chris explained in post #98, in the case of the RNGs used in the GCP, simple ad hoc XOR post-processing does indeed remove bias even if, as the paper goes on to add, and which you quoted in the post just before Chris's, it "may not be sufficient to eliminate correlations among bits" (emphasis added by me). Correlations among bits are, however, as I understand it, irrelevant to the GCP, so XORing resolves the only potential problem here: bias (too many ones or zeroes).
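
(A toy illustration of the de-biasing - my own sketch, assuming a simple deterministic alternating mask rather than the actual GCP firmware: the raw stream is biased towards ones, XORing flips exactly half the bits, and the post-XOR mean returns to 0.5:)

Code:
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
raw = (rng.random(n) < 0.55).astype(int)  # biased raw bits: P(1) = 0.55
mask = np.tile([0, 1], n // 2)            # simple alternating XOR mask
xored = raw ^ mask
print(raw.mean(), xored.mean())           # about 0.55 before XORing, about 0.50 after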

You seemed to accept Chris's explanation given that which you posted immediately after it (in post #99), although you later expressed confusion in post #109, which Chris cleared up in post #110. (I address below the above-mentioned argument which you raised in post #111).

Paper #2 and theoretical versus empirical provability

I think it's also worth noting that one of the focusses of paper #2 is the provability of the randomness of RNGs, where the sense of provability is theoretical and based on physical principles applicable to the system, rather than based on any sort of empirical testing after the device has been built. The paper focusses to a large extent on explaining why (theoretical) provability is not achievable in many cases, including for noise-based RNGs. However, this does not mean or imply that such devices do not exhibit very good randomness in practice, and the paper acknowledges that these devices often can and do pass empirical tests - tests which for practical purposes demonstrate their randomness, at least at the time of testing.

Because the RNGs used in the GCP have been tested in this way, and because they go through periods of calibration testing and have also been found overall not to deviate from chance expectation over the course of the experiment, theoretical provability seems to be a moot point. So, paper #2 likewise raises no red flags for the GCP.
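
(As a concrete example of the kind of empirical check involved, here is a minimal sketch of the standard frequency ("monobit") test from the NIST SP 800-22 battery, run on a simulated bit stream - the GCP devices and their calibration data are obviously not reproduced here:)

Code:
import math
import numpy as np

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 1_000_000)               # stand-in for a calibration run
s = int(np.sum(2 * bits - 1))                      # map 0/1 to -1/+1 and sum
p_value = math.erfc(abs(s) / math.sqrt(2 * len(bits)))
print(p_value)                                     # a good generator gives p-values spread over [0, 1]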

Buried environmental signals

In post #111, you suggested that an environmental signal could anyhow be buried in the data, a claim that you repeated in the much more recent post #216 to me. I think the general type of hypothesis that's suggested by what you wrote in these posts is something like this:

Even though XORing removes systemic bias and ensures that the average remains at chance expectation, it remains possible that the momentary, second-by-second outputs of all RNGs are affected by common environmental signals such that they are momentarily all correlated.

This would seem to have to work as follows: some environmental signal which affects all RNGs simultaneously (let's say, for example, the Earth's magnetic field) causes at the physical level some bias in the raw bits of the RNGs which is similar across all RNGs, and then, after being XORed, this (momentary) shared, inter-RNG bias remains.

This would seem, though, to require that the same XOR mask is used for all RNGs when XORing the raw bits of each RNG, otherwise the bias would be affected by XORing in different ways at different RNGs, which in general would eliminate it as a shared bias (the biases at different RNGs would in general end up being different). This in turn means that XOR masks would have to be time-synchronised across all RNGs.

Here, then, is the problem: they're not. For a start, the two different types of RNGs use different XOR masks, and the experimental effects (inter-RNG correlations) are observed even between RNGs of the two different types. Secondly, even for RNGs of the same type, the offsets of the bitmasks are simply not time-synchronised across all RNGs - and certainly not with the precision that would be required for biases in the raw bits to survive XOR processing across all RNGs.
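
(To illustrate one piece of this argument with a toy simulation - my own sketch, not the GCP analysis or Bancel's: two devices see the same slowly drifting bias; their per-second 200-bit trial sums are clearly correlated before XORing, but after XORing with simple alternating masks at different offsets the correlation vanishes, because the balanced mask cancels the shared bias within every trial:)

Code:
import numpy as np

rng = np.random.default_rng(3)
n_trials, bits_per_trial = 5000, 200
drift = 0.02 * np.sin(np.linspace(0, 20 * np.pi, n_trials))  # shared slowly-varying bias

def device(mask_offset):
    p = 0.5 + drift[:, None]                                  # every device sees the same bias
    raw = (rng.random((n_trials, bits_per_trial)) < p).astype(int)
    mask = np.roll(np.tile([0, 1], bits_per_trial // 2), mask_offset)  # unsynchronised mask offset
    return raw.sum(axis=1), (raw ^ mask).sum(axis=1)

raw1, xor1 = device(0)
raw2, xor2 = device(1)
print(np.corrcoef(raw1, raw2)[0, 1])  # clearly positive: the shared bias shows up pre-XOR
print(np.corrcoef(xor1, xor2)[0, 1])  # close to zero: the balanced mask cancels it per trial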

Peter Bancel discusses this in more detail in the section "The XOR Problem" between pages 10 and 11 of his 2014 paper, An analysis of the Global Consciousness Project.

Moreover, this is presumably the essential mechanism which any non-anomalous explanation of the GCP's results would use. The fact that neither Linda nor anybody else has suggested a way of overcoming it is why I remain confident in claiming that nobody has yet suggested a single possible non-anomalous hypothesis - that is, one which could clear this hurdle and others like it. Possibility is constrained by facts, and whilst in some sense it is "possible" that the GCP results are due to, say, geomagnetic fields, given the constraining fact of unsynchronised XOR masks, this no longer seems possible. I'm still open to any non-anomalous hypothesis that anybody might be able to suggest which accounts for all the facts, but nobody in this thread has yet provided one.

Finally: Linda claims in post #225 that neither has an anomalous explanation met the requirements which I laid out in my original lengthy response in post #213 (to which we can add this fourth requirement of defeating the unsynchronised XOR masks). This, though, is not true. For example, an anomalous explanation could be teleological: in other words, it could work backwards from its desired outcome of an ongoing momentary post-XOR correlation of RNG outputs; it would thus not, as is a non-anomalous explanation, be constrained to act blindly on the raw bits of RNGs, it could "arrange" such that momentary biases survived XOR processing in a way that a blind, physical process cannot.

A few other responses to Linda's most recent post:

(2018-10-30, 06:15 PM)fls Wrote: Why would I do that? That particular calculation is not valid with respect to understanding whether or not these results are remarkable. Again, it makes the mistake of applying a priori statistics to post hoc findings.

So, why did you even remark on the number of events in the extreme 10% then if you thought that assessing that number would not anyway be valid?

(2018-10-30, 06:15 PM)fls Wrote: Bem and Radin (as well as others) have been caught changing their hypotheses after the fact

Please let us know exactly what you're referring to.

Quote:Laird: [C]hanging [prespecified hypotheses] after the fact would amount to deliberate fraud.

Linda: No it wouldn't [because the (para)psychology community isn't leveling that charge].

That's a non sequitur: the definition of deliberate fraud does not entail anybody levelling any charge.

(2018-10-30, 06:15 PM)fls Wrote: Why do you think scientists in other fields don't take this seriously? Because they are aware that this is what researchers do

Oh? Can you justify that claim?

(2018-10-30, 06:15 PM)fls Wrote: Bancel's findings are consistent with fortuitous selection.

That is incorrect, for reasons detailed in my original lengthy response to you (again, post #213).

[14] In post #54, you'd (Max had) written that you'd "done all this on Skeptiko, around April this year [2017]" and that you'd "posted a couple of papers". Seven posts later, in post #61, Chris identified the Skeptiko thread to which you were referring as Closer to the Truth interviews with Josephson and Wolf. You didn't contest this so I assume that Chris was correct.

I looked through that thread and I think I've identified the couple of papers to which you referred. In post #26 in that Skeptiko thread, you linked to the paper PUF-Based Random Number Generation (let's call this "paper #1"). Then, in post #30 in that Skeptiko thread, you linked to the paper True Random Number Generators, although the original link that you provided no longer works, so I had to do a bit of digging to locate it at the link I've provided (let's call this "paper #2").
(This post was last modified: 2019-01-17, 10:17 AM by Laird.)
[-] The following 2 users Like Laird's post:
  • Ninshub, Doug
