Principles of Curiosity


(2017-10-05, 05:23 PM)malf Wrote: Follow the subsequent exchange between Chris and me where we arrive at some consensus on the issues. Also check out the 'pushing false positives' link I posted at #39.

With p-values as low as those they point to, I would agree. The Radin work (for example) that I am familiar with is not.

Am I missing some notable claims he is making based on very low p-values?
I have no problem in admitting that I have not a clue when it comes to understanding the technicalities of p-values, etc. Still less in admitting that it leaves me stone cold. Indeed, I have often thought that the work of Radin, well intentioned though it may be, is also of limited value when considering the nature of these phenomena. I have my doubts about whether such phenomena lend themselves to this kind of precise experimentation.

One thing that has always surfaced when considering statistical evidence is the question: so what? It seems to my uneducated mind that all they are doing is trying to measure a probability against chance. The odds against something happening by chance. But even if the odds are quite high that something could have happened by chance, that doesn't prove that it did. Can you prove intention using statistics?

Even when the odds against chance are astronomically high, scientists still insist that we go with chance as the only possible cause. It seems to me that is precisely what has happened with evolution theory, especially with the origin of life and the DNA molecule. So dogmatists will claim the matter is settled if the slightest glimmer of a chance is present, no matter how insignificant, as long as it supports their current worldview.
I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension.
Freeman Dyson
(2017-10-05, 05:55 PM)malf Wrote: There is some discussion around this in the comments.
I agree that Novella may not be clear on the issue (especially since he specifically states that he had to ask someone else about it).

The comment by Jay explains the difference between the prior probability and the probability distribution on which the Bayes factor is calculated, and the extent to which either is subjective. And the posts that David Colquhoun links to further down in the comments are also helpful in terms of explaining why frequentist statistics don't measure the false positive rate, even though that is how they are interpreted. That is, Radin's "6-sigma significance" is usually taken to mean that there is a "6-sigma chance that these results are a false-positive". And this is where effect size becomes relevant, as smaller effects increase the likelihood that positive results are false-positives, even in the setting of very low p-values (because the probability of producing a positive result when the alternative hypothesis is true is also very low).

Linda
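As a rough, back-of-the-envelope illustration of the point in that last paragraph (in the spirit of the Colquhoun material linked in those comments), here is a small Python sketch. The prior, alpha and power figures are made-up assumptions, purely for illustration, not anyone's actual data:

def false_discovery_rate(prior_true, alpha, power):
    # Fraction of "significant" results that are actually false positives,
    # assuming a fraction prior_true of tested hypotheses are real effects,
    # tests are run at significance level alpha, and power is the chance of
    # detecting a real effect (power falls as the effect size shrinks).
    true_positives = prior_true * power
    false_positives = (1 - prior_true) * alpha
    return false_positives / (true_positives + false_positives)

# Same alpha, falling power: lower-powered (smaller-effect) studies yield a
# larger share of false positives among their "significant" results.
for power in (0.8, 0.3, 0.1):
    print(power, round(false_discovery_rate(prior_true=0.1, alpha=0.05, power=power), 2))
# prints roughly: 0.8 -> 0.36, 0.3 -> 0.6, 0.1 -> 0.82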
(2017-10-05, 09:15 PM)fls Wrote: I agree that Novella may not be clear on the issue (especially since he specifically states that he had to ask someone else about it).

The comment by Jay explains the difference between the prior probability and the probability distribution on which the Bayes factor is calculated, and the extent to which either is subjective. And the posts that David Colquhoun links to further down in the comments are also helpful in terms of explaining why frequentist statistics don't measure the false positive rate, even though that is how they are interpreted. That is, Radin's "6-sigma significance" is usually taken to mean that there is a "6-sigma chance that these results are a false-positive". And this is where effect size becomes relevant, as smaller effects increase the likelihood that positive results are false-positives, even in the setting of very low p-values (because the probability of producing a positive result when the alternative hypothesis is true is also very low).

Linda

Just an observation. You said on another thread that you didn't want to talk to me. I have respected that, as I said I would, by not responding to your comments, even when I saw something misleading.

But please be clear that won't apply if I'm having a discussion and you intervene in it.
(2017-10-05, 11:16 PM)Chris Wrote: Just an observation. You said on another thread that you didn't want to talk to me. I have respected that, as I said I would, by not responding to your comments, even when I saw something misleading.

But please be clear that won't apply if I'm having a discussion and you intervene in it.
I'm sorry. I meant my post to be relevant to jkmac/Malf's comments on small effect sizes, per the several very useful posts/links in the comments section of the article Malf posted. I wasn't paying enough attention to who said what in the thread.

Linda
Well, I would just say two things.

It's true that some people misunderstand what the p value means (and occasionally some people who do understand it misdescribe it). But it does have a clear and simple meaning - the probability of getting the observed result or a more extreme one by chance, if the null hypothesis is true - so I don't think it's reasonable to criticise its use just because some people misunderstand it.

Regarding the suggestion that if the effect size is lower a false positive is more likely, even for a given p value, that doesn't seem correct to me. Of course it depends on the same assumptions that all Bayesian probability calculations depend on. But given an observed Z value, if the experimental hypothesis is simply based on a fixed effect size corresponding to that Z value, then as far as I can see the probability of a false positive depends only on the p value and on the assumed prior probability that the hypothesis is true, and not on the effect size. I'll be happy to be corrected if I'm wrong, but as this doesn't really have anything to do with Brian Dunning or his film, probably the best place for this discussion is the existing thread on statistical significance:
http://psiencequest.net/forums/thread-90.html
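As a quick sanity check on that definition of the p-value, here is a tiny Monte Carlo sketch in Python. The coin-flip scenario and numbers are purely hypothetical, not taken from any study discussed here:

import numpy as np

# The p-value is the probability, assuming the null hypothesis (pure chance)
# is true, of seeing a result at least as extreme as the one observed.
rng = np.random.default_rng(0)
n_flips, observed_hits = 100, 62                        # hypothetical experiment

sims = rng.binomial(n=n_flips, p=0.5, size=1_000_000)   # "chance only" replications
p_value = np.mean(sims >= observed_hits)                # one-sided p-value

print(p_value)   # close to the exact binomial tail probability, roughly 0.01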
(2017-10-05, 11:16 PM)Chris Wrote: Just an observation. You said on another thread that you didn't want to talk to me. I have respected that, as I said I would, by not responding to your comments, even when I saw something misleading.

Chris, this is obviously a matter for your own conscience, but I would be disappointed if you saw something misleading and felt compelled to refrain from pointing it out. That seems to me to be an overall loss to the forum.
(2017-10-06, 07:53 AM)Chris Wrote: Well, I would just say two things.

It's true that some people misunderstand what the p value means (and occasionally some people who do understand it misdescribe it). But it does have a clear and simple meaning - the probability of getting the observed result or a more extreme one by chance, if the null hypothesis is true - so I don't think it's reasonable to criticise its use just because some people misunderstand it.

Regarding the suggestion that if the effect size is lower a false positive is more likely, even for a given p value, that doesn't seem correct to me. Of course it depends on the same assumptions that all Bayesian probability calculations depend on. But given an observed Z value, if the experimental hypothesis is simply based on a fixed effect size corresponding to that Z value, then as far as I can see the probability of a false positive depends only on the p value and on the assumed prior probability that the hypothesis is true, and not on the effect size. I'll be happy to be corrected if I'm wrong, but as this doesn't really have anything to do with Brian Dunning or his film, probably the best place for this discussion is the existing thread on statistical significance:
http://psiencequest.net/forums/thread-90.html
The article I pointed to earlier which explains this is:

http://rsos.royalsocietypublishing.org/c...1/3/140216

Linda
(2017-10-06, 07:53 AM)Chris Wrote: Well, I would just say two things.

It's true that some people misunderstand what the p value means (and occasionally some people who do understand it misdescribe it). But it does have a clear and simple meaning - the probability of getting the observed result or a more extreme one by chance, if the null hypothesis is true - so I don't think it's reasonable to criticise its use just because some people misunderstand it.

Regarding the suggestion that if the effect size is lower a false positive is more likely, even for a given p value, that doesn't seem correct to me. Of course it depends on the same assumptions that all Bayesian probability calculations depend on. But given an observed Z value, if the experimental hypothesis is simply based on a fixed effect size corresponding to that Z value, then as far as I can see the probability of a false positive depends only on the p value and on the assumed prior probability that the hypothesis is true, and not on the effect size. I'll be happy to be corrected if I'm wrong, but as this doesn't really have anything to do with Brian Dunning or his film, probably the best place for this discussion is the existing thread on statistical significance:
http://psiencequest.net/forums/thread-90.html
I'm thinking the same, Chris. It doesn't feel to me that there is a sneaky problem of false positives sitting here in front of us that has eluded detection all these years. That is, other than cases where the probability of an event happening (the p-value) is simply not significant enough, and we are too willing to attach more significance to it than we should.

The moral (I think) is: be cautious of probabilities that are not reasonably separated from chance.

But here's the dirty little secret:
I know from my experience that there is a dynamic that is not directly associated with material physics. So when I see data that indicates this, even if it's not super strong, I am willing to accept it. There are of course others who have no such fundamental understanding, and they unsurprisingly are apt to be more critical of marginal data. And they will tend to further test and analyse until the data "normalizes" as they think it should. And it seems that marginal data takes very little "force" to nudge one way or the other, through explanation, rationalization, further analysis, test methodology, or whatever, until it conforms to someone's expectations.

This may be totally garbage thinking, but it sort of feels like the way things work.
