Statistical Significance

Just an FYI.
 
When someone talks about the effect of a variable, they are talking about what happens when you change that variable without changing any other independent variables. 

Linda
fls

I'm afraid if you need several attempts to work out how your Delphic utterances should be interpreted, the rest of us stand very little chance of guessing correctly.

Why not just try to explain things clearly to start with?
(2017-10-09, 09:38 PM)Chris Wrote: fls

I'm afraid if you need several attempts to work out how your Delphic utterances should be interpreted, the rest of us stand very little chance of guessing correctly.

Why not just try to explain things clearly to start with?

I have been clear all along.

I said, "smaller effects increase the likelihood that that positive results are false-positives." 

I didn't say, "smaller effects increase the likelihood that positive results are false positives even when the sample size is increased." I didn't say, "smaller effects increase the likelihood that positive results are false positives even when the sample size and alpha are increased." So it was clear that I was not referring to smaller effects in the setting of changing the other variables in order to keep the power the same. I even said as much in another post where I specifically mentioned the effect of power.

You were the one who added those inappropriate assumptions.

Linda
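
To put numbers on that claim - a minimal sketch of my own, not from the thread, assuming a one-sided binomial design at a chance hit rate of 0.5 and a purely hypothetical prior probability of 0.1 that the effect is real - holding the sample size and alpha fixed, a smaller true effect lowers the power, and by Bayes' rule a larger share of the significant results are then false positives:

```python
# Minimal sketch (hypothetical parameters): with n and alpha held fixed,
# a smaller true effect lowers power, so a larger share of significant
# results are false positives. The prior P(effect real) = 0.1 is an
# assumption for illustration only.
from scipy.stats import binom

n, alpha, prior = 100, 0.05, 0.1

# Critical hit count: smallest k with P(X >= k | p = 0.5) <= alpha.
k_crit = binom.isf(alpha, n, 0.5) + 1

for p_true in (0.70, 0.60, 0.55):            # shrinking true effect
    power = binom.sf(k_crit - 1, n, p_true)  # P(X >= k_crit | p_true)
    # P(no effect | significant result), by Bayes' rule:
    fp_share = alpha * (1 - prior) / (alpha * (1 - prior) + power * prior)
    print(f"p_true={p_true:.2f}  power={power:.2f}  "
          f"P(false positive | significant)={fp_share:.2f}")
```
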
fls

As I said, thank goodness for equations!
(2017-10-09, 11:26 PM)Chris Wrote: fls

As I said, thank goodness for equations!

I have to agree with that. Otherwise, I doubt that we would have agreed on the effect of power on false positives.

Linda
(2017-10-10, 01:20 AM)fls Wrote: I have to agree with that. Otherwise, I doubt that we would have agreed on the effect of power on false positives.

And equations even seem to have overcome your unwillingness to reply to my posts. Only a few days ago you wouldn't talk to me. Now you won't stop talking to me.

I'll try to post some more equations later today ...
(2017-10-10, 07:52 AM)Chris Wrote: And equations even seem to have overcome your unwillingness to reply to my posts. Only a few days ago you wouldn't talk to me. Now you won't stop talking to me.

I'll try to post some more equations later today ...

You should be aware that Linda will never allow you the last word.
I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension.
Freeman Dyson
(2017-10-10, 09:38 PM)Kamarling Wrote: You should be aware that Linda will never allow you the last word.

Clever.
(2017-10-09, 05:20 PM)fls Wrote: Looking at your statement, "you wouldn't be able to achieve a certain p value unless you had the statistical power to do so in the first place"...

Yeah, that was badly expressed - such a black-and-white case is rare: it's usually about likelihood rather than "enough". That is, generally, assuming the effect is real, the greater the statistical power, the more likely you are to get a given small p value (or smaller). But in rare cases (e.g. very few trials, or aiming for an extremely low p value) certain p values are simply unattainable - i.e. you don't have "enough" statistical power to achieve them at all, no matter how many times you repeat the experiment - and those are the (very rare) cases for which my original statement is - I think! - accurate (again, though, I welcome anybody to step in and correct me if I'm wrong - I am by no means an expert on statistics).

In any case, this was really beside the point Chris had made, and probably didn't need to be said.
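
As a concrete illustration of that rare case - my own example, assuming a one-sided exact binomial test against a chance hit rate of 0.5 - with only four trials, even a perfect score can't reach p < 0.05:

```python
# Illustration (assuming a one-sided exact binomial test against a
# chance hit rate of 0.5): with very few trials, even a perfect score
# cannot attain p < 0.05, however the experiment turns out.
from scipy.stats import binomtest

for n in (4, 5, 6):
    p_min = binomtest(n, n, 0.5, alternative="greater").pvalue  # all hits
    print(f"n={n}: smallest attainable p-value = {p_min:.4f}")
# n=4 gives 0.0625, so p < 0.05 is unreachable for any outcome.
```
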
J. E. Kennedy has posted a new paper on his site, written with Caroline Watt, entitled "How to Plan Falsifiable Confirmatory Research". (A note explains that it was submitted to two journals but rejected, once because a referee wanted the paper rewritten to express a more favourable view of his chosen statistical philosophy, and once because it was deemed not to be novel.)
https://jeksite.org/psi/falsifiable_research.pdf

The main recommendation is that studies should be designed to use a 0.05 significance level and a power of 0.95. That means that they would be capable either of confirming the experimental hypothesis or falsifying it - if the null hypothesis were true, the probability of a false positive would be 5%, while if the experimental hypothesis were true, the probability of a false negative would also be 5%.
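
To get a feel for what that recommendation implies in practice, here is a rough sample-size sketch of my own, using the standard normal approximation for a one-sided one-sample proportion test against a chance hit rate of 0.5; the minimum effect sizes of interest are hypothetical:

```python
# Rough sketch (normal approximation, one-sided one-sample proportion
# test against chance p0 = 0.5): trials needed for alpha = 0.05 and
# power = 0.95, for a few hypothetical minimum effect sizes of interest.
from math import ceil, sqrt
from scipy.stats import norm

alpha, power, p0 = 0.05, 0.95, 0.5
z_a, z_b = norm.isf(alpha), norm.isf(1 - power)  # both about 1.645 here

for p1 in (0.55, 0.52, 0.51):  # hypothetical minimum hit rates of interest
    n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1)))
         / (p1 - p0)) ** 2
    print(f"p1 = {p1}: about {ceil(n)} trials needed")
```

The required sample size grows rapidly as the minimum effect of interest shrinks, which is part of the practical difficulty noted below.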

That's an appealing idea in a way, but there are drawbacks where psi research is concerned. For one thing, the researcher has to fix a minimum effect size of interest in order to design the falsifying aspect. But also, more fundamentally, the method depends on modelling the statistical behaviour that would be expected if psi exists (just as the Bayesian approach does). It assumes psi would be a well-behaved phenomenon in which the results of individual trials would, in statistical terms, be identically distributed and independent of one another.

Of course, the experimental data on psi are often inconsistent with these assumptions. Sceptics may take that as evidence of questionable research practices, but logically it could just as well be attributed to a failure of the assumptions about psi. If psi is characterised as an interaction between mind and environment that can't be explained by known physical laws, it's not obvious why - for example - successive trials with the same subject should be statistically independent. 
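
To illustrate how much the independence assumption matters, here is a simulation of my own devising: trials with no effect at all (marginal hit rate exactly 0.5) but with serial correlation introduced by a simple Markov chain, analysed with a binomial test that assumes independent trials. The nominal 5% false-positive rate is substantially inflated:

```python
# Simulation (illustrative, my own construction): trials with no effect
# (marginal hit rate exactly 0.5) but serial correlation - each trial
# repeats the previous outcome with probability 0.7 - analysed with a
# binomial test that assumes independence. The nominal 5% false-positive
# rate is inflated well above 0.05.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
n_trials, n_experiments, stay = 100, 2000, 0.7

rejections = 0
for _ in range(n_experiments):
    hits = np.empty(n_trials, dtype=int)
    hits[0] = rng.integers(2)                    # fair first trial
    for t in range(1, n_trials):                 # Markov dependence
        hits[t] = hits[t - 1] if rng.random() < stay else 1 - hits[t - 1]
    p = binomtest(int(hits.sum()), n_trials, 0.5).pvalue
    rejections += p < 0.05

print(f"false-positive rate: {rejections / n_experiments:.3f} (nominal 0.05)")
```
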

The traditional (frequentist) approach of testing the null hypothesis has the great virtue that under the null hypothesis the statistics are known, so the procedure is logically consistent and requires no assumptions about psi. Admittedly, some kind of model of psi has to be used to estimate the numbers when designing experiments, because there's no alternative. But I think it's hazardous to go beyond that, and conclude - on the basis of a model of psi - that psi doesn't exist, when it may just be that psi doesn't fit the model.