Psience Quest

Full Version: The Global Consciousness Project
Pages: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31

Chris

(2017-09-15, 08:53 PM)Max_B Wrote: [ -> ]Sorry Chris, 1-4 that is pretty much what the paper says... but it means absolutely nothing to me, and doesn't make any sense... remember you are talking to a layman... the rest means nothing either. I'm interested in how bias from a noise-based device could get into the data analysis presented by GCP?

As far as I can see, such a signal does get into the data... because they are certainly not adding 200-bit chunks of XORed, equally distributed bits (which would always give them a sum of 100) and uploading that figure to the GCP database... because that would be pointless... we would simply get a database containing time-stamped 100's every second from every RNG in the network. A database containing billions of the number 100 would, I'm sure, tell them nothing... so despite what you have previously said, this cannot be what they are doing.

Below are three truncated lines of raw Comma Separated Values (CSV) data taken from the GCP database... what they contain is not clearly explained... the first line starts with some sort of field headers; the significance of the numbers to the right is unknown, but I suspect they are the reference numbers for each RNG in the GCP network.

The second and third lines clearly start with a timestamp (date and time) corresponding to the headers in the line above, then a truncated stream of numbers which seem to fall above and below the 100 figure. I assume these are a series of sums of the 200 equally balanced bits (100 '0's and 100 '1's) that you referred to, which would remove all bias from the RNG device.

But what is absolutely clear, is that these numbers cannot be the sum of 200 equally balanced 0's and 1's, because if they were, all we would see is 100 for each Comma Separated Value...

12,"gmtime","Date/Time",1,28,37,100,101,102,105,106,108,110,111,112,114,115,116,119,134,161,226,228,231,1004,1005,1021,1022,1025,1026,...

13,1102294861,2004-12-06 01:01:01,111,106,97,93,93,100,116,103,91,88,94,103,85,94,94,99,100,102,103,97,89,114,91,93,100,96,,100,89,103,...

13,1102294862,2004-12-06 01:01:02,95,105,127,106,94,105,100,100,96,99,88,98,101,107,95,103,106,101,105,102,96,95,94,99,101,107,88,100,...


So whatever you said before, Chris, that XORing leaves these 200 bits taken from the RNG as equally balanced 0s and 1s, can't be right... I can see, and anybody looking can see, deviations above and below the '100' we should expect...

So how do we get the numbers above, which are taken from the GCP database? According to you that should not be possible, because the sum of 200 XORed bits should always equal 100.

I think I see the difficulty.

What I was doing before with my example bit sequence was trying to demonstrate that, regardless of the input, after XORing with a balanced mask, the expected numbers of 0s and 1s would become equal. That is, if we average the numbers of 0s and 1s over all the possible positions of the mask, those two average numbers will be equal.

That's not to say that for each of the possible positions of the mask individually, the numbers of 0s and 1s will be equal. In general they won't be (and they weren't in the example I made up). So in the GCP data, for a particular bitstream produced by noise and a particular position of the mask in relation to that bitstream, in general the numbers of 0s and 1s won't be equal. So in general when 200 bits are added up the answer won't be 100.

But if we consider the average value of the sum of the 200 bits - that is, averaged over all the possible bitstreams and all the possible positions of the mask - then the numbers of 0s and 1s must come out equal. So the average value of the sum is 100, and the XORing has overcome the bias to produce the right average - the average which would be produced if the bitstreams were behaving ideally. (And the average would still be right no matter how badly behaved the input bitstream was. But if the bitstream was very badly behaved, other features of the frequency distribution, such as the variance, wouldn't be close to their ideal values.)
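The averaging argument above is easy to check numerically. Below is a minimal sketch, not the actual GCP firmware: the 60/40 input bias, the simple alternating mask, and the seeding are illustrative assumptions. Individual 200-bit sums scatter above and below 100, just like the database rows, but their mean still converges on 100.

```python
import random

random.seed(1)

# A balanced 200-bit mask: equal numbers of 0s and 1s.
MASK = [0, 1] * 100

def biased_bits(n, p_one=0.6):
    """A noise source with a strong (hypothetical) bias towards 1s."""
    return [1 if random.random() < p_one else 0 for _ in range(n)]

def trial_sum(offset):
    """XOR one 200-bit sample against the mask, starting at a given offset,
    and return the sum of the resulting bits."""
    bits = biased_bits(200)
    return sum(b ^ MASK[(offset + k) % 200] for k, b in enumerate(bits))

sums = [trial_sum(offset=i) for i in range(10000)]
mean = sum(sums) / len(sums)

# Individual sums vary above and below 100, but the mean is close to 100
# despite the 60/40 bias of the raw bits.
print(min(sums), max(sums), round(mean, 1))
```

The key point the simulation illustrates: XORing with a balanced mask forces the *average* sum to 100, but says nothing about any individual one-second sum, which is why the database rows still fluctuate.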

[Edited to add: Your interpretation of the extracts from the database is correct. The first line has a sequence of reference numbers for the RNGs whose data are included in the file. Each of the other lines contains data for one second, and the values are the sums of 200 bits for the RNGs specified in the first line.]

Chris

Well, thanks again for looking at that. After all, it was never to be expected that we'd get to the bottom of it in this thread.

Regarding further investigations by the GCP people - my impression is that it's just Roger Nelson running things now. The network is still operating (albeit with a smaller number of RNGs than it had at its peak last decade) and there is an active Facebook page, where they are continuing to look at individual events. But the formal pre-registered series has ended, so I think it's being done on a purely post hoc basis now. Roger Nelson is also planning to publish a book on the project.

I think Peter Bancel feels he's nailed it down to an experimenter psi effect, and has moved on to other studies. He gave a talk as part of the 2017 PARAMOOC course, which I think contains analyses in addition to those he has published. I presume it's still available to view for free after a registration procedure.
https://carlossalvarado.wordpress.com/20...mooc-2017/

I started to do a bit of analysis a while ago, but there really are a lot of data to handle. I got as far as breaking down the contributions into hourly totals for each pair of RNGs, with the idea of trying to see if any subset of the data can be identified as the source of the effect. I haven't had a chance to do anything on that for a while, though. I'll try to make a bit more progress if I can get a chance.
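The kind of breakdown I mean can be sketched as follows. This is a hypothetical illustration, not my actual code: it assumes each RNG's one-second sum is standardized against the ideal binomial mean of 100 and variance of 50, and that the products of the standardized deviations are accumulated into hourly totals for each RNG pair.

```python
from itertools import combinations
from math import sqrt

def hourly_pair_totals(rows):
    """rows: iterable of (unix_timestamp, {rng_id: 200-bit sum}) records,
    one per second. Returns hourly totals of z_i * z_j for each RNG pair."""
    totals = {}  # (hour, rng_i, rng_j) -> accumulated pair product
    for ts, sums in rows:
        hour = ts // 3600
        # Standardize each sum against the ideal binomial(200, 0.5)
        # distribution: mean 100, variance 50.
        z = {rng: (s - 100) / sqrt(50)
             for rng, s in sums.items() if s is not None}
        for i, j in combinations(sorted(z), 2):
            key = (hour, i, j)
            totals[key] = totals.get(key, 0.0) + z[i] * z[j]
    return totals
```

A subset of pairs or hours that dominated the grand total would then be a candidate for the source of the effect.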
(2017-09-15, 11:56 PM)Max_B Wrote: [ -> ]So the suggestion I made right back at the start, that this signal could be something that is also correlated with something to do with human behavior - energy usage - to which these devices are vulnerable is absolutely valid. *If* we accept that the researchers are choosing major events to analyse in a fair, unbiased way, then that looks to be a possible explanation to consider.
I think you are on the right track in several ways, but I am unconvinced that the "hidden message" (if there is one) lies in energy usage. 

I mean, look logistically at how that would manifest. It would be about people getting up and turning on the AC or putting on the kettle for a cup of tea. In my mind, that is MUCH too gross an input mechanism for a system like this. There isn't enough nuance, enough complexity, enough resolution, in that type of design.

Also, as I said some time ago, energy usage translates mostly to demand, not noise. If power companies are doing their job well (admittedly not always the case; nobody's perfect, after all) they will vary generating capacity to match demand. If that is managed well, there will be no real change in power quality.

So yes, I believe there is quite possibly signal embedded in the noise, but I don't think it is due to demand because that just doesn't fit correctly with the characteristics I'm seeing. 

The good news, however, is that the source of the signal doesn't really matter, as long as we can extract it.

But to your point (I think), understanding the way the signal is injected into the data stream would help immensely in decoding the message...

OK so finally on this point, my intuition tells me this:
it may be harder to figure out how the signal is injected, and easier (via trial and error combined with brute-force engineering and data analysis techniques) to tease the signal out of the noise. And maybe when we get a clearer look at the characteristics of the signal, we can work our way back to how it gets there in the first place. That's the way many very complex engineering/science problems have been solved, in my past experience. The idea is to grab onto whatever slim thread of coherent information you can find and gently, through whatever means you can dream up (direct cause and effect don't need to be figured out or relied on at this stage), untangle it to the point where it will sit still long enough for close inspection. Then, from the close inspection, you can glean the data necessary to be smarter about your approach. This is where cause and effect ARE figured out, and can now play a part in shaping/informing a smarter, more appropriate system design.

That's how I look at it purely from an engineering/system design standpoint. Hope it makes sense to others.

Chris

One odd feature of the GCP results is that Bancel analysed the correlations between pairs of RNGs according to the type of device. The results, in Table 1 of this paper -
https://www.researchgate.net/publication...xploration - are shown below.

[Image: REGPairs.jpg]
The differences between the three kinds of pairs are not statistically significant, but it's interesting that the correlation is weakest between the pairs of Mindsongs, and strongest between mixed pairs, where one device is Mindsong and the other is Orion.

Thinking about the scope for statistical artefacts, I would have guessed that the scope was probably greatest for the pairs of Mindsongs (because of the complicated XOR mask and the fact that only part of the mask was used for each sample of bits), and smallest for the mixed pairs.
(2017-09-16, 11:41 AM)Chris Wrote: [ -> ]One odd feature of the GCP results is that Bancel analysed the correlations between pairs of RNGs according to the type of device. The results, in Table 1 of this paper -
https://www.researchgate.net/publication...xploration - are shown below.

[Image: REGPairs.jpg]
The differences between the three kinds of pairs are not statistically significant, but it's interesting that the correlation is weakest between the pairs of Mindsongs, and strongest between mixed pairs, where one device is Mindsong and the other is Orion.

Thinking about the scope for statistical artefacts, I would have guessed that the scope was probably greatest for the pairs of Mindsongs (because of the complicated XOR mask and the fact that only part of the mask was used for each sample of bits), and smallest for the mixed pairs.
That makes intuitive sense to me.

Did they identify geography as well in these data?

Can you help those of us who are less familiar with the nomenclature of statistics to relate these figures, and to visualize how much of a correlation there is in terms of probability (i.e. 1:100, 1:1000, etc.) or some other more "accessible" way for a layman?

Chris

(2017-09-16, 11:52 AM)jkmac Wrote: [ -> ]Did they identify geography as well in these data?

Can you help those of us who are less familiar with the nomenclature of statistics to relate these figures, and to visualize how much of a correlation there is in terms of probability (i.e. 1:100, 1:1000, etc.) or some other more "accessible" way for a layman?

Yes - the longitude and latitude of each RNG should be available at the GCP website. You can also see a map here (they tend to be most concentrated in North America and Europe):
http://noosphere.princeton.edu/egghosts.html

The correlations are very weak (as indicated by 10^-5 at the top of the column), but as there are billions of data points, they turn out to be statistically significant (apart from the Mindsong-Mindsong). The corresponding p values (according to this online calculator - http://www.socscistatistics.com/pvalues/...ution.aspx - selecting the one-tailed test) are:
Mindsong-Mindsong, p = 0.067 (not significant at p = 0.05, partly because the numbers of Mindsongs are smaller than the numbers of Orions).
Orion-Orion, p = 0.00034.
and for the others the p value is so small it returns only p < 0.00001. I'd have to find a better calculator, but obviously those values are extremely statistically significant.
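For anyone who wants to check such numbers without an online calculator, here is a rough sketch of the conversion. It is an assumption on my part about the test involved: for a tiny correlation r estimated from N data points, the test statistic is approximately z = r * sqrt(N), and the one-tailed p value is the upper-tail normal probability. The figures in the example call are made up for illustration; the actual pair counts are in Bancel's paper.

```python
from math import sqrt
from statistics import NormalDist

def one_tailed_p(r, n):
    """One-tailed p value for a small sample correlation r over n data
    points, using the normal approximation z = r * sqrt(n)."""
    z = r * sqrt(n)
    return 1.0 - NormalDist().cdf(z)

# e.g. a correlation of 1.5e-5 over 4 billion points (illustrative numbers):
print(one_tailed_p(1.5e-5, 4e9))
```

This also shows why such tiny correlations can be significant at all: sqrt(N) for billions of points is large enough to push z well into the tail.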

Chris

(2017-09-16, 01:12 PM)Max_B Wrote: [ -> ]You could go mad trying to find patterns in the existing GCP data Chris.

We all go a little mad sometimes.

But seriously, if it's a statistical artefact, it shouldn't be a hopeless task trying to track down where it's coming from. If it's a psi effect, I agree that looking at subsets of the data is unlikely to tell us much (or much more than Bancel's analysis has already told us).

Chris

(2017-09-16, 02:50 PM)Chris Wrote: [ -> ]Yes - the longitude and latitude of each RNG should be available at the GCP website. You can also see a map here (they tend to be most concentrated in North America and Europe):
http://noosphere.princeton.edu/egghosts.html

Peter Bancel looked at whether the strength of correlations between pairs of RNGs decreased with the distance between them. He found it did for "minor" events, but didn't for "major" events (though obviously there's an element of subjectivity in this classification).
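Anyone wanting to repeat that kind of distance check could start from the latitude/longitude figures on the GCP site and the standard haversine formula. This is a generic helper, not Bancel's code; a spherical Earth of radius 6371 km is assumed.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (latitude, longitude)
    points given in degrees, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))
```

Binning the pair correlations by this distance, separately for "minor" and "major" events, would reproduce the comparison described above.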
Is there any evidence that other digital/electronic devices malfunction at emotional moments?
(2017-09-18, 04:45 AM)malf Wrote: [ -> ]Is there any evidence that other digital/electronic devices malfunction at emotional moments?

That's what I was thinking too... though I reckon it would be even more complicated to track down glitches in god knows how many different devices.

But yeah, it seems like if RNGs go bananas en masse during certain events we could expect similar behavior from computers of all sizes.

cheers.