Psience Quest

Full Version: The Global Consciousness Project
OK, so, let's talk magic.

We have a bunch of hats in front of us. We would expect all of them to be empty in the absence of magic. But, we choose [edit: on the basis that they looked like they were owned by wizards] a bunch of them without first looking inside them and hypothesise in advance that we'll find a bunch of rabbits, and - presto! - we end up pulling a bunch of rabbits out of our hats! Strange thing is, when we look inside all of the hats that we didn't choose, we find very, very few rabbits [edit: or maybe not so strange - after all, those hats didn't look like they were owned by wizards!].

Our hat selection was flexible, but is that where the magic really happens...?...

Chris

(2019-01-18, 05:37 AM)malf Wrote: [ -> ]But I was specifically talking about the flexibility for event selection, not ‘the (non) hypothesis for each event’.

And we know from Bancel that that is where the ‘magic’ happens.

The "flexibility" refers to the fact that - especially in the early period of the experiment - they tried different statistics for different events.

It doesn't mean that for each individual event the statistic wasn't fixed in advance of looking at the data. What they say is that the definition of the statistic, and of course the times covered by the event, were always fixed before they looked at the data.

This is what I wrote in the very first post of this thread:

But for evidential purposes, the significant data are those produced by the "Registry of Formal Hypotheses and Specifications". According to the organisers of the project, for each of a sequence of 513 events in the period 1998-2015, a statistical hypothesis was specified before the data were examined, and was then tested. In subsequent analysis about a dozen of these events were excluded because the hypotheses were poorly defined, or not defined before any of the data were seen, but for the 500 classified as "rigorously defined", the cumulative Z value was 7.31, corresponding to a p value of 1.333 x 10^-13.

http://global-mind.org/results.html
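The Z-to-p conversion quoted above is straightforward to check: for a one-tailed test, p = Φ(-Z) = erfc(Z/√2)/2. A quick sketch using only the standard library (the figures are taken from the quoted registry result):

```python
import math

def one_tailed_p(z: float) -> float:
    """One-tailed p-value for a standard normal Z score."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Cumulative Z of 7.31 reported for the 500 "rigorously defined" events
p = one_tailed_p(7.31)
print(f"{p:.3e}")  # approximately 1.3e-13, matching the quoted figure
```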

As far as I'm aware, that result remains totally unexplained by sceptics. The hypotheses were stated to be pre-specified - that is, specified before the data were examined. The specification wasn't just a vague hypothesis - it was a specific statistical test that would yield a definite Z value for the event. And it was stated that all the pre-specified events would be included, so there would be no "publication bias" in the results.

Sceptics have criticised certain post hoc analyses of particular events, such as 9-11, which in principle is fair enough. But obviously those criticisms don't address the formal registry, for which the hypotheses are stated to have been decided in advance. And sceptics tend to dismiss the whole project as a post hoc fishing expedition, which proves only that they haven't bothered to look at the protocol.
(2019-01-18, 02:53 AM)Laird Wrote: [ -> ]Linda, I notice that you ignored the most significant part of my post: the explanation based on unsynchronised XOR masks as to why we seem to be able to rule out a non-anomalous hypothesis. You are of course free to do that, I just want to note it.

You quoted and directed that part to MaxB. It didn't occur to me to jump in to that discussion.
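For context on the XOR masks mentioned above: the project's hardware RNGs XOR each raw bit with a balanced mask (equal numbers of 0s and 1s), which cancels any fixed bias in the raw bit rate. A minimal illustration of why this works; the bias value and the alternating mask here are invented for the demo, not taken from the actual device firmware:

```python
import random

random.seed(1)

BIAS = 0.55  # hypothetical first-order bias in the raw bit stream
N = 200_000

raw = [1 if random.random() < BIAS else 0 for _ in range(N)]
mask = [i % 2 for i in range(N)]           # balanced alternating 0/1 mask
masked = [b ^ m for b, m in zip(raw, mask)]

# XOR with a balanced mask flips exactly half the bits, so
# P(1) becomes 0.5*BIAS + 0.5*(1 - BIAS) = 0.5 regardless of BIAS.
print(sum(raw) / N)     # near 0.55
print(sum(masked) / N)  # near 0.50
```

Note this only removes first-order bias (the mean bit rate); it says nothing about correlations between devices, which is why mask synchronisation matters to the argument.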

Quote:That doesn't answer my question.

It was meant to. Commenting on the distribution of events within their threshold for "extreme" events would be relevant. Looking at how a shift in the curve changes the number of events within that threshold would not.  

Quote:Perhaps I can give you a better idea of what I mean by "exactly". It might look something like this:

You list, in your response in this thread, the experiments in which you claim that Bem, Radin, and others have been caught changing their hypotheses after the fact. For each of your claims, you state the prespecified hypothesis, quoting it exactly with a checkable reference, and then you state the changed, after-the-fact hypothesis, quoting it exactly with a checkable reference. Then you quote directly, with a checkable reference, the person who "caught" this change. If there was a response to the accusation by the researcher(s) in question, then you directly quote that response with a checkable reference.

That is, of course, exactly what was offered to you in the Feeling the Future thread I referenced. And to a less detailed degree on the Radin blog - you have to dig a little on your own to see that the experiment was based on a prior experiment which reported on treated vs. untreated groups, and that his power calculations were based on this comparison (not on the comparison he eventually reported - treated vs. untreated groups only amongst the small subset of believers). I think there would be difficulty finding another example which fits your fortuitously capricious requirements, given that the practice is common (and therefore gives the appearance of acceptability) and I have yet to see a proponent parapsychologist call out their colleagues for doing so. It seems to depend upon whether or not a non-proponent takes the time to look at a particular paper, for that problem to be mentioned. And of course, as has been mentioned numerous times here and elsewhere, it's hard to get non-proponents to take the time to look at research which is assumed to be bollocks a priori (unfortunately).

Quote:There are two questions here:
  1. Does changing a prespecified hypothesis after the fact amount to deliberate fraud?
  2. Do (para)psychologists regard such a thing as deliberate fraud?
Can you please confirm that you accept that the answer to the first is "Yes" even though you continue to answer "No" to the second?

The answer to the first is "no". Changing a prespecified hypothesis after the fact is excused as "following the evidence", "looking at those groups/conditions in which the experiment worked", "it was a pre-planned condition" based on recollection, etc. Unless you have some sort of documentation as to the hypothesis, recorded prior to the results being obtained/known, anything can be claimed as the hypothesis of interest. And the researchers may even sincerely recall that that was their idea all along - we all know the problem of how our recollection is overwritten once we are given feedback.

If you want to know what the pre-specified hypothesis was, without having to depend upon a researcher's self-interest, you can look at the research which the study is based on, you can look at what hypothesis the power calculation was based on, and you can look at study registries.

Quote:So, there are basically three parts to your claim:
  1. Parapsychologists change prespecified hypotheses after the fact.
  2. Scientists in other fields are aware of this.
  3. It is because of this awareness that scientists in other fields don't take parapsychology seriously.
We might add the implied qualification to the first part: that this is done enough that it generally invalidates the results of the field.

That paper doesn't even begin to justify the first part of your claim: reading the abstract, it deals with the field of psychology, not of parapsychology - the word "parapsychology" does not even appear in the paper.

I said, "because they are aware that this is what researchers do, even if they are not supposed to." I wasn't referring to parapsychology, otherwise I would have said, "because they are aware that this is what parapsychologists do, even if they are not supposed to." I was referring to researchers in general. Because researchers in any field (particularly the social sciences) know that the use of QRPs is fairly ubiquitous (even sometimes using them themselves), if given the chance, they are skeptical of research where the use of QRPs could create the supposed effect.

Quote:I doubt it. I've tried it in three scenarios: logged in under my ordinary administrative account, logged in under a test, non-admin account, and not logged in. It takes me to the same (correct) post each time.

I wasn't criticizing you. It didn't work for me, and I was just explaining why I didn't know which claim you wanted me to address. I don't think it matters, because I don't recall that you ever addressed the fortuitous selection problem that Bancel found.

Quote:If in that context by "fortuitous" you mean something like "serendipitously guided by some anomalous force, entity, or other phenomenon" - which I think is the sense in which Peter Bancel intends his explanation - then I can accept the possibility of "fortuitous selection" after all.

"Fortuitous" means "event selection which worked out well for the researchers".

Quote:The sense of "fortuitous selection" which I think has been ruled out in this experiment is something like "occurring by a random process of blind luck". The post to which I linked explained one way by which it has been ruled out: a resampling analysis in 2008 demonstrated that results with the same level of significance could be obtained only once in 100,000 attempts (2008 was well before the experiment ended, so the likelihood is that the figure would have been even higher by the end).

To this, you responded: "I don't think anyone disagrees that the event samples are improbable under random sampling. The question is whether they would also be improbable if samples were drawn for other goals". The problem with this, as I discussed at length in the post to which you were responding, is that it implies that there is some causal mechanism at work which relates the goals (selection criteria) to an effect - but we seem to be able to rule out non-anomalous causal mechanisms, and you agree that we can rule out blind luck, so we continue to be left then with the one hypothesis that we can't rule out: that the results were obtained anomalously (including the possibility of "fortuitous selection" in the sense of "serendipitously guided by some anomalous force, entity, or other phenomenon").
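The resampling logic referred to above can be sketched as follows: draw many sets of per-event Z scores under the null (standard normal), combine each set into a cumulative Z with Stouffer's method, and count how often the combined Z reaches the observed value. The event count and observed Z follow the quoted registry figures; everything else is an illustrative stand-in, not the GCP's actual resampling code:

```python
import math
import random

random.seed(42)

N_EVENTS = 500      # events in the formal registry
OBSERVED_Z = 7.31   # reported cumulative Z
N_RESAMPLES = 10_000

hits = 0
for _ in range(N_RESAMPLES):
    # Stouffer combination of null N(0,1) per-event Z scores
    stouffer_z = sum(random.gauss(0, 1) for _ in range(N_EVENTS)) / math.sqrt(N_EVENTS)
    if stouffer_z >= OBSERVED_Z:
        hits += 1

# P(Z >= 7.31) is on the order of 1e-13, so even a large number
# of null resamples should produce no hits at all.
print(hits)  # 0
```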

I don't know why you think non-anomalous causes have been ruled out. The tests you have mentioned previously have been inadequate to rule out all non-anomalous causes (including the most obvious ones), and have even, in some cases, been inadequate to rule out the cause they purport to be testing.

Quote:And, again, this is all based on the conditional: if everybody involved is being basically honest, then how can we explain these results? Any argument for dishonesty is a separate issue.

If everybody involved is as honest as everyone else, then the results can be explained as defensible flexibility.

Linda

Chris

(2019-01-18, 02:53 AM)Laird Wrote: [ -> ]Perhaps I can give you a better idea of what I mean by "exactly". It might look something like this:

You list, in your response in this thread, the experiments in which you claim that Bem, Radin, and others have been caught changing their hypotheses after the fact. For each of your claims, you state the prespecified hypothesis, quoting it exactly with a checkable reference, and then you state the changed, after-the-fact hypothesis, quoting it exactly with a checkable reference. Then you quote directly, with a checkable reference, the person who "caught" this change. If there was a response to the accusation by the researcher(s) in question, then you directly quote that response with a checkable reference.

I am guessing that the evidence being referred to in that Bem thread is in this post by me:
https://psiencequest.net/forums/thread-c...3#pid14003

It appears from the original presentation at a conference that in Bem's experiment 5, the images were originally viewed as six groups, classified firstly as negative, neutral and positive, and secondly as low and high arousal. Only one group - the negative, high-arousal images - produced significant results. So in the results section, Bem said that, "after the fact", the other five groups could be amalgamated and presented as control trials. That is how they were presented in the subsequent paper, and the only hypothesis mentioned was the one concerning negative, high-arousal images.

It's not so much that there's a change in the hypothesis, as that the results for the groups that didn't show a significant change haven't been reported (or rather have been reported as controls). As far as I'm concerned, these negative results should have been included in the "File drawer" section of Bem's paper. Of course, a complication is that he may not have expected some of the groups to give significant results. In the original conference presentation he says "For my own precognitive studies, however, I wanted to use stimuli that are strongly arousing, reasoning that repeated exposure to such stimuli would produce affective habituation: Negatively arousing stimuli would subsequently be experienced less negatively and positively arousing stimuli would be experienced less positively." That would suggest his focus was on two of the six groups, one of which gave significant results and the other of which didn't.
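The concern about the six image groups is essentially a multiple-comparisons one: testing six subgroups at α = 0.05 and reporting only the one that "works" inflates the false-positive rate well above 5%. A quick illustration of the arithmetic (treating the six tests as independent is a simplification):

```python
N_GROUPS = 6   # negative/neutral/positive x low/high arousal
ALPHA = 0.05

# Probability that at least one of six independent null tests
# comes out "significant" at the 5% level.
familywise = 1 - (1 - ALPHA) ** N_GROUPS
print(f"{familywise:.3f}")  # 0.265
```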
Whoops. Seems I've made the mistake of thinking I could reopen this discussion without having to deal with a bunch of semantic obfuscation, manipulation, and (wilful?) obtuseness. (It's not that every point made is totally without merit, but too many of them are for it to be worth continuing. Thanks anyway).
(2019-01-19, 01:04 AM)Laird Wrote: [ -> ]Whoops. Seems I've made the mistake of thinking I could reopen this discussion without having to deal with a bunch of semantic obfuscation, manipulation, and (wilful?) obtuseness. (It's not that every point made is totally without merit, but too many of them are for it to be worth continuing. Thanks anyway).

Lol. I'm also not sure why you reopened this discussion, given that you were going to do what you always do - call me (and presumably malf for daring to point out that what the researchers call "formal hypotheses" are no such thing) names as soon as you weren't able to address the points I/we raised. At least you got to it faster, this time.

Linda
I guess it's not surprising that lies and spin are condescendingly employed with a little "lol" to defend the extent to which the earlier lies and spin had been effective (not at all) and to attempt to turn a legitimate calling-out of mendacious game-playing into a petty need to insult. Without the support of the facts, some turn to tactics like these. What is surprising is for how long this game is and has been played - without apparent fatigue. Why and to what end? Fascinating questions.
(2019-01-19, 04:28 AM)Laird Wrote: [ -> ]I guess it's not surprising that lies and spin are condescendingly employed with a little "lol" to defend the extent to which the earlier lies and spin had been effective (not at all) and to attempt to turn a legitimate calling-out of mendacious game-playing into a petty need to insult. Without the support of the facts, some turn to tactics like these. What is surprising is for how long this game is and has been played - without apparent fatigue. Why and to what end? Fascinating questions.

"Lies and spin" ??? What a bunch of BS. I've never lied or tried to "spin" anything. Nor do I play games. And I happen to find the name-calling quite tiresome.

Why do I bother to engage, knowing what I'll be met with? Good question. 

Linda
(2019-01-19, 04:58 AM)fls Wrote: [ -> ]"Lies and spin" ??? What a bunch of BS.

Linda, the BS is all from you. Utter lie: "you weren't able to address the points I/we raised". I've spent many, many words addressing in excruciating detail the points you've raised, only to have you come back and either ignore what I've written or spin it away. It's not "name-calling" to point this out; it's a necessary part of withdrawal from the conversation, to make it clear that and why meaningful dialogue has become impossible.

Chris

(2019-01-18, 08:34 AM)Chris Wrote: [ -> ]The "flexibility" refers to the fact that - especially in the early period of the experiment - they tried different statistics for different events.

It doesn't mean that for each individual event the statistic wasn't fixed in advance of looking at the data. What they say is that the definition of the statistic, and of course the times covered by the event, were always fixed before they looked at the data.

This is what I wrote in the very first post of this thread:

But for evidential purposes, the significant data are those produced by the "Registry of Formal Hypotheses and Specifications". According to the organisers of the project, for each of a sequence of 513 events in the period 1998-2015, a statistical hypothesis was specified before the data were examined, and was then tested. In subsequent analysis about a dozen of these events were excluded because the hypotheses were poorly defined, or not defined before any of the data were seen, but for the 500 classified as "rigorously defined", the cumulative Z value was 7.31, corresponding to a p value of 1.333 x 10^-13.

http://global-mind.org/results.html

As far as I'm aware, that result remains totally unexplained by sceptics. The hypotheses were stated to be pre-specified - that is, specified before the data were examined. The specification wasn't just a vague hypothesis - it was a specific statistical test that would yield a definite Z value for the event. And it was stated that all the pre-specified events would be included, so there would be no "publication bias" in the results.

Sceptics have criticised certain post hoc analyses of particular events, such as 9-11, which in principle is fair enough. But obviously those criticisms don't address the formal registry, for which the hypotheses are stated to have been decided in advance. And sceptics tend to dismiss the whole project as a post hoc fishing expedition, which proves only that they haven't bothered to look at the protocol.

I wonder if malf could indicate whether he can see any aspect of the statistical analysis which he feels wasn't completely specified in advance - according to what Roger Nelson says - which the rest of us have missed.