Thanks for sharing that video, Chris. It seems like a helpful overview. I noticed, though, that, as in the Psi Encyclopedia's GCP entry, there was little to no discussion of competing hypotheses when it comes to the GCP, although decision augmentation theory was briefly mentioned in the discussion about the PEAR lab experiments.
I'm now prompted to get back into this thread and offer a few missing responses. First off, quoting Max's last response to me (emphasis added by me):
(2018-10-26, 06:45 AM)Max_B Wrote: [ -> ]The comments I’ve made are definitely not irrelevant, both the weaknesses of the experiments event selection, and the problems with using environmentally coupled RNG devices in this way. It would be useful if you spent some time studying the literature on RNGs, their different designs, problems (voltage, temperature etc), and limitations (particularly relevant to their use for long range simulations and for encryption). There are some great review papers available on line, and some great videos on encryption too. Everything Chris and I discussed is on that other GCP thread, somewhere on PSIQuest, couple of links in there too.
Max, I think you'd missed that this thread is that same thread - revived - where you and Chris had your discussion. I've reviewed that discussion and I've also done the homework you suggested, by reading in full the papers to which you've linked [14]: paper #1 is PUF-Based Random Number Generation and paper #2 is True Random Number Generators, from which you've quoted several times in this thread already.
On review of the papers and your discussion, I don't find anything that raises any red flags for the GCP.
Your initial argument was that changes in temperature and voltage could cause significant bias in random number generators (RNGs). This argument, though, does not hold, for two reasons:
- Even though the papers raise such environmentally-induced bias as a possibility for more basic components of RNGs, they make it clear that additional components and design features mitigate the seriousness of this bias.
- Whatever bias remains is entirely mitigated by XOR processing in the RNGs used in the GCP.
Later, you raised another argument, to which as yet you haven't received a response, based on the claim that an inter-RNG environmental signal is in any case buried in the data despite overall bias being removed by XORing. To succeed, though, this argument would require that pre-XOR biases which are correlated amongst RNGs survive XOR processing, which in turn would require that a common XOR mask be time-synchronised across all RNGs - but this is not the case. I offer a more detailed response below.
First, though, here are some details re your initial argument as to temperature- and voltage-induced bias.
Paper #1 and reason #1
Paper #1 deals with "a candidate hardware random number generator" using "Physical Random Functions" or "Physical Unclonable Functions" (PUFs). In other words, this is not so much a review paper of RNGs as a description of the design and implementation of a single (type of) RNG. Given that its design appears to be different from the RNGs used in the GCP, it has only limited relevance to the GCP. Let's look at it in any case.
In post #26 in the Skeptiko thread (see [14]), you quoted from section 2 of this paper words to the effect that changing the temperature and voltage affected the noise of the "PUF delay circuit" by upward of 9%. You seemed to be implying that temperature and voltage changes could significantly reduce the randomness of this RNG in practice. However, the PUF delay circuit is only one component of this RNG. Additional processes, described in the paper's section 3, are used to manage any noise in the PUF, and section 4 summarises how well it passes statistical tests in practice: even the lowest pass rate across all of the randomness tests was still 93.5%. The authors conclude that this RNG "has been shown to produce outputs which pass most statistical tests for randomness. This suggests that PUF-based random number generation is a cheap and viable alternative to more expensive and complicated hardware random number generation". They do not raise any concerns about voltage and temperature in practice, and appear to endorse this device for ordinary usage - although, again, it was not used in the GCP. In summary, paper #1 raises no red flags for the GCP.
Paper #2 and reason #1
Paper #2 is a remarkably thorough review. As I understand it, the type of RNGs used in the GCP would be categorised in this paper as "noise-based", and, in post #30 in the Skeptiko thread in which you linked to this paper (again, see [14]), you quoted from the section devoted to them thus: "And then there is a problem of stability: even the smallest drift of the mean value (for example due to temperature or supply voltage change) will create a large bias".
The implication seems to be again that temperature and voltage are a serious problem for the randomness of these RNGs in practice. However, the statement that you quoted refers only to the "general idea" of noise-based RNGs, and, very shortly after it, the authors write that "Going from this basic circuit, researchers have proposed many circuits whose aim is to improve the randomness, notably the bias".
Paper #2 and reason #2
Further on in the same section, the authors write (which you quoted as part of a longer quote in post #72 to this thread): "For all noise-based generators, some kind of post-processing is required. In some cases a simple ad hoc post-processing such as XORing several subsequent bits or von Neumann [94] de-biasing may be good enough".
As Chris explained in post #98, in the case of the RNGs used in the GCP, simple ad hoc XOR post-processing does indeed remove bias even if, as the paper goes on to add, and as you quoted in the post just before Chris's, it "may not be sufficient to eliminate correlations among bits" (emphasis added by me). Correlations among bits are, however, as I understand it, irrelevant to the GCP, so XORing resolves the only potential problem here: bias (too many ones or zeroes).
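To make that de-biasing effect concrete, here's a minimal sketch in Python. The alternating 0101... mask is chosen purely for illustration (the actual GCP devices use their own device-specific XOR patterns), and the 0.6 bias is an arbitrary assumed value:

```python
import random

random.seed(42)
n = 100_000

# Raw bits from a hypothetical noise source with a strong bias towards 1.
raw_bits = [1 if random.random() < 0.6 else 0 for _ in range(n)]

# A balanced XOR mask - alternating 0, 1, 0, 1, ... (illustrative only).
mask = [i % 2 for i in range(n)]

# Post-processing: XOR each raw bit with the corresponding mask bit.
out_bits = [b ^ m for b, m in zip(raw_bits, mask)]

raw_mean = sum(raw_bits) / n
out_mean = sum(out_bits) / n
print(f"raw mean:      {raw_mean:.3f}")  # ~0.6 - clearly biased
print(f"post-XOR mean: {out_mean:.3f}")  # ~0.5 - bias removed
```

Because a balanced mask inverts half the bits and passes the other half through, a constant bias p in the raw stream becomes (p + (1 - p)) / 2 = 0.5 at the output, whatever the size of the original bias.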
You seemed to accept Chris's explanation, given what you posted immediately after it (in post #99), although you later expressed confusion in post #109, which Chris cleared up in post #110. (I address below the above-mentioned argument which you raised in post #111.)
Paper #2 and theoretical versus empirical provability
I think it's also worth noting that one of the focusses of paper #2 is the provability of the randomness of RNGs, where the sense of provability is theoretical and based on physical principles applicable to the system, rather than based on any sort of empirical testing after the device has been built. The paper focusses to a large extent on explaining why (theoretical) provability is not achievable in many cases, including for noise-based RNGs. However, this does not mean or imply that such devices do not exhibit very good randomness in practice, and the paper acknowledges that these devices often can and do pass empirical tests - tests which for practical purposes demonstrate their randomness, at least at the time of testing.
Because the RNGs used in the GCP have been tested in this way, and because they go through periods of calibration testing and have also been found overall not to deviate from chance expectation over the course of the experiment, theoretical provability seems to be a moot point. So, paper #2 likewise raises no red flags for the GCP.
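For a sense of what such empirical testing involves, here's a simplified sketch of the most basic test of this kind - a frequency (monobit) check along the lines of the first test in the NIST statistical test suite, which just asks whether the proportion of ones is statistically consistent with 0.5. The threshold of 3 standard deviations is my own illustrative choice, not the suite's exact criterion:

```python
import math
import random

def monobit_z(bits):
    """Z-score of the ones count against the fair-coin expectation n/2."""
    n = len(bits)
    ones = sum(bits)
    return (ones - n / 2) / math.sqrt(n / 4)

random.seed(0)
sample = [random.getrandbits(1) for _ in range(100_000)]
z = monobit_z(sample)
print(f"z = {z:.2f}, pass = {abs(z) < 3.0}")
```

A well-behaved generator should land within a few standard deviations of zero the vast majority of the time, whereas the kind of mean drift paper #2 warns about shows up quickly as a large |z|.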
Buried environmental signals
In post #111, you suggested that an environmental signal could nonetheless be buried in the data, a claim that you repeated in the much more recent post #216 to me. I think the general type of hypothesis suggested by what you wrote in these posts is something like this:
Even though XORing removes systemic bias and ensures that the average remains at chance expectation, it remains possible that the momentary, second-by-second outputs of all RNGs are affected by common environmental signals such that they are momentarily all correlated.
This would seem to have to work as follows: some environmental signal which affects all RNGs simultaneously (let's say, for example, the Earth's magnetic field) causes at the physical level some bias in the raw bits of the RNGs which is similar across all RNGs, and then, after being XORed, this (momentary) shared, inter-RNG bias remains.
This would seem, though, to require that the same XOR mask is used for all RNGs when XORing the raw bits of each RNG, otherwise the bias would be affected by XORing in different ways at different RNGs, which in general would eliminate it as a shared bias (the biases at different RNGs would in general end up being different). This in turn means that XOR masks would have to be time-synchronised across all RNGs.
Here, then, is the problem: they're not. For a start, the two different types of RNG used in the project employ different XOR masks, yet the experimental effects (inter-RNG correlations) are observed even between RNGs of different types. Secondly, even for RNGs of the same type, the offsets of the bitmasks are simply not time-synchronised across all RNGs - and certainly not with the precision that would be required for biases in the raw bits to survive XOR processing across all RNGs.
Peter Bancel discusses this in more detail in the section "The XOR Problem" between pages 10 and 11 of his 2014 paper, An analysis of the Global Consciousness Project.
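To see why unsynchronised masks destroy a shared signal, here's a toy Python sketch. It deliberately takes the "environmental signal" to the extreme of making both devices' raw bits identical, and it uses independent random masks as a stand-in for the real devices' unshared, offset XOR patterns:

```python
import random

random.seed(7)
n = 100_000

# Extreme shared environmental signal: both devices see identical raw bits.
raw = [random.getrandbits(1) for _ in range(n)]

mask_a = [random.getrandbits(1) for _ in range(n)]  # device A's mask
mask_b = [random.getrandbits(1) for _ in range(n)]  # device B's (unsynchronised) mask

def agreement(xs, ys):
    """Fraction of positions at which two bit streams agree (0.5 = uncorrelated)."""
    return sum(x == y for x, y in zip(xs, ys)) / len(xs)

# Same mask on both devices: the shared signal survives XOR fully intact.
shared = agreement([b ^ m for b, m in zip(raw, mask_a)],
                   [b ^ m for b, m in zip(raw, mask_a)])

# Different masks: the post-XOR streams are as good as independent.
unshared = agreement([b ^ m for b, m in zip(raw, mask_a)],
                     [b ^ m for b, m in zip(raw, mask_b)])

print(f"same mask:       agreement = {shared:.3f}")
print(f"different masks: agreement = {unshared:.3f}")
```

An agreement rate of 0.5 is exactly what two independent fair bit streams give, so even a perfectly shared raw-bit signal leaves no inter-RNG correlation once the masks differ.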
This, too, is presumably the essential mechanism on which any non-anomalous explanation of the GCP's results would have to rely. The fact that neither Linda nor anybody else has suggested a way of overcoming it is why I remain confident in claiming that nobody has yet suggested a single possible non-anomalous hypothesis - that is, one which could clear this hurdle and others like it. Possibility is constrained by facts, and whilst in some sense it is "possible" that the GCP results are due to, say, geomagnetic fields, given the constraining fact of unsynchronised XOR masks, this no longer seems possible. I'm still open to any non-anomalous hypothesis that anybody might be able to suggest which accounts for all the facts, but nobody in this thread has yet provided one.
Finally: Linda claims in post #225 that neither has an anomalous explanation met the requirements which I laid out in my original lengthy response in post #213 (to which we can add this fourth requirement of defeating the unsynchronised XOR masks). This, though, is not true. For example, an anomalous explanation could be teleological: in other words, it could work backwards from its desired outcome of an ongoing momentary post-XOR correlation of RNG outputs. It would thus not be constrained, as a non-anomalous explanation is, to act blindly on the raw bits of RNGs; it could "arrange" things such that momentary biases survived XOR processing in a way that a blind, physical process cannot.
A few other responses to Linda's most recent post:
(2018-10-30, 06:15 PM)fls Wrote: [ -> ]Why would I do that? That particular calculation is not valid with respect to understanding whether or not these results are remarkable. Again, it makes the mistake of applying a priori statistics to post hoc findings.
So why did you even remark on the number of events in the extreme 10%, then, if you thought that assessing that number would not be valid anyway?
(2018-10-30, 06:15 PM)fls Wrote: [ -> ]Bem and Radin (as well as others) have been caught changing their hypotheses after the fact
Please let us know exactly what you're referring to.
Quote:Laird: [C]hanging [prespecified hypotheses] after the fact would amount to deliberate fraud.
Linda: No it wouldn't [because the (para)psychology community isn't leveling that charge].
That's a non sequitur: the definition of deliberate fraud does not entail anybody levelling any charge.
(2018-10-30, 06:15 PM)fls Wrote: [ -> ]Why do you think scientists in other fields don't take this seriously? Because they are aware that this is what researchers do
Oh? Can you justify that claim?
(2018-10-30, 06:15 PM)fls Wrote: [ -> ]Bancel's findings are consistent with fortuitous selection.
That is incorrect, for reasons detailed in my original lengthy response to you (again, post #213).
[14] In post #54, you'd (Max had) written that you'd "done all this on Skeptiko, around April this year [2017]" and that you'd "posted a couple of papers". Seven posts later, in post #61, Chris identified the Skeptiko thread to which you were referring as Closer to the Truth interviews with Josephson and Wolf. You didn't contest this, so I assume that Chris was correct.
I looked through that thread and I think I've identified the couple of papers to which you referred. In post #26 in that Skeptiko thread, you linked to the paper PUF-Based Random Number Generation (let's call this "paper #1"). Then, in post #30 in that Skeptiko thread, you linked to the paper True Random Number Generators, although the original link that you provided no longer works, so I had to do a bit of digging to locate it at the link I've provided (let's call this "paper #2").