Statistical Significance


Incidentally, the first form of Bayes's Theorem in the last post is helpful in examining the suggestion that "smaller effects increase the likelihood that positive results are false-positives, even in the setting of very low p-values". If the p value (i.e. a in the equation above) is the same, and the prior probabilities assigned to H0 and H1 are the same, then effect size can influence the likelihood of a false positive only through the power of the experiment (b above). 

If the small-effect-size experiment is designed to have the same power as the large-effect-size experiment, the likelihood of a false positive will not be any greater.
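For concreteness, here is a minimal Python sketch (my own illustration, not part of the original post) of that relationship, using the notation of the earlier post - a for the significance level and b for the power. The function name, priors and numbers are purely illustrative:

```python
# A minimal sketch (illustrative, not from the post above) of the first
# form of Bayes's Theorem being discussed: the probability that a
# "significant" result is a false positive, given the significance level a,
# the power b, and the prior probabilities of H0 and H1.

def false_positive_probability(a, b, prior_h0):
    """P(H0 | significant) = a*P(H0) / (a*P(H0) + b*P(H1))."""
    prior_h1 = 1.0 - prior_h0
    return (a * prior_h0) / (a * prior_h0 + b * prior_h1)

# Two experiments with the same a, the same priors, and the same power b
# give the same false-positive probability, whatever their effect sizes,
# because effect size enters only through b.
print(false_positive_probability(a=0.05, b=0.8, prior_h0=0.5))  # ~0.059
```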
(2017-10-07, 06:00 PM)Chris Wrote: Incidentally, the first form of Bayes's Theorem in the last post is helpful in examining the suggestion that "smaller effects increase the likelihood that positive results are false-positives, even in the setting of very low p-values". If the p value (i.e. a in the equation above) is the same, and the prior probabilities assigned to H0 and H1 are the same, then effect size can influence the likelihood of a false positive only through the power of the experiment (b above). 

If the small-effect-size experiment is designed to have the same power as the large-effect-size experiment, the likelihood of a false positive will not be any greater.

Thanks, Chris. That claim did seem mistaken to me. I appreciate you laying out all of that groundwork to explain why it is mistaken. Intuitively, even without that groundwork, what you say makes sense too - it doesn't matter what the effect size is so long as you have the requisite statistical power; stated another way: you wouldn't be able to achieve a certain p value unless you had the statistical power to do so in the first place, regardless of (or, rather, given) the effect size.

Not sure how clearly that comes across, so feel free to restate it in clearer terms or correct it.
(2017-10-09, 04:28 PM)Laird Wrote: Thanks, Chris. That claim did seem mistaken to me. I appreciate you laying out all of that groundwork to explain why it is mistaken. Intuitively, even without that groundwork, what you say makes sense too - it doesn't matter what the effect size is so long as you have the requisite statistical power; stated another way: you wouldn't be able to achieve a certain p value unless you had the statistical power to do so in the first place, regardless of (or, rather, given) the effect size.

Not sure how clearly that comes across, so feel free to restate it in clearer terms or correct it.

You may have misunderstood something. Chris confirmed what I described. I'm not sure where you are getting "mistaken".

Looking at your statement, "you wouldn't be able to achieve a certain p value unless you had the statistical power to do so in the first place"...that describes the question we are asking. The p-value tells you "how likely am I to obtain a result this extreme or more extreme, when the null is true?" You can also ask "how likely am I to obtain a result this extreme or more extreme, when the alternative hypothesis is true?" Sometimes the power tells you that this is also unlikely - that is, the result was unlikely under the null and under the alternative hypotheses. You can't necessarily assume that the p-value you happened to obtain in a particular study tells you anything about the study's power. 

Linda
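To illustrate that possibility (a sketch of my own, not from the post above): suppose a one-sided z-test in which the test statistic is N(0, 1) under the null and, because the true effect is small and the sample modest, N(1, 1) under the alternative. An observed z of 3.0 is then improbable under both hypotheses:

```python
# A result that is unlikely under both H0 and H1 (illustrative one-sided
# z-test; the distributions and the observed value are assumptions).

from statistics import NormalDist

z_obs = 3.0
p_under_null = 1 - NormalDist(0, 1).cdf(z_obs)  # ~0.0013
p_under_alt  = 1 - NormalDist(1, 1).cdf(z_obs)  # ~0.0228

print(f"P(Z >= {z_obs}) under H0: {p_under_null:.4f}")
print(f"P(Z >= {z_obs}) under H1: {p_under_alt:.4f}")
```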
(2017-10-09, 05:20 PM)fls Wrote: You may have misunderstood something. Chris confirmed what I described. I'm not sure where you are getting "mistaken". 

Linda

I thought the same thing...
(2017-10-09, 05:20 PM)fls Wrote: You may have misunderstood something. Chris confirmed what I described. I'm not sure where you are getting "mistaken". 

Astonishing.
(2017-10-09, 05:47 PM)Chris Wrote: Astonishing.

Not really.
(2017-10-09, 06:00 PM)Arouet Wrote: Not really.

No. Clearly those equations don't back up what fls said on the other thread - that "smaller effects increase the likelihood that positive results are false-positives, even in the setting of very low p-values". If you keep the p value the same and reduce the effect size, then the false-positive probability can go up, go down, or stay the same. It depends on the power. If the power is the same, the false-positive probability is the same.

Nor, in my view, do they back up what fls said on this thread about the importance of the power and the unimportance of the p value. Clearly, they both play a role.

Anyway, the virtue of simple equations like those is that they are easy to interpret. They are far more efficient than verbal descriptions in a situation like this.
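A small illustrative calculation (my own, assuming the same Bayes formula and a 50/50 prior) shows both quantities at work - the false-positive probability moves with the power at a fixed p-value threshold, and with the threshold at a fixed power:

```python
# Illustrative numbers (assumed 50/50 prior on H0 and H1): both the
# p-value threshold a and the power b affect the probability that a
# significant result is a false positive.

def fpp(a, b, prior_h0=0.5):
    return (a * prior_h0) / (a * prior_h0 + b * (1.0 - prior_h0))

print(fpp(a=0.05, b=0.2))   # ~0.200  same a, low power
print(fpp(a=0.05, b=0.8))   # ~0.059  same a, high power
print(fpp(a=0.01, b=0.8))   # ~0.012  smaller a, same power
```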
Just an FYI.

When someone talks about the effect of a variable, they are talking about what happens when you change that variable without changing anything else.

Linda
(2017-10-09, 08:06 PM)fls Wrote: Just an FYI.

When someone talks about the effect of a variable, they are talking about what happens when you change that variable without changing anything else.

You need to decide whether you want to talk to me or not. ;-)

But thank you for clarifying. As I've just pointed out, if the effect size is reduced, leaving the power unchanged, then the false-positive probability is also unchanged.
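As a worked illustration (my own sketch, assuming a one-sided z-test with standardized effect size d, sample size n and alpha = 0.05): halving the effect size while quadrupling the sample leaves the power, and hence the false-positive probability, unchanged:

```python
# An illustrative check (assumed one-sided z-test, standardized effect
# size d, n observations, alpha = 0.05): a smaller effect with a larger
# sample can have the same power, and therefore the same false-positive
# probability, as a larger effect with a smaller sample.

from math import sqrt
from statistics import NormalDist

def power(d, n, alpha=0.05):
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    return 1 - NormalDist().cdf(z_alpha - d * sqrt(n))

def false_positive_probability(alpha, b, prior_h0=0.5):
    return (alpha * prior_h0) / (alpha * prior_h0 + b * (1 - prior_h0))

for d, n in ((0.5, 25), (0.25, 100)):   # half the effect, four times the sample
    b = power(d, n)
    print(f"d = {d}, n = {n}: power = {b:.3f}, "
          f"false-positive probability = {false_positive_probability(0.05, b):.3f}")
# Both lines print the same power (~0.80) and the same false-positive probability.
```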
