Principles of Curiosity

(2017-10-06, 11:00 AM)jkmac Wrote: It doesn't feel to me that there is a sneaky problem of false positives sitting here in front of us that has eluded detection all these years.
Just an FYI - the "sneaky problem of false positives" is the problem of the failure to replicate, which has been discussed for years (in many fields - not singling out parapsychology here).
Linda
(2017-10-06, 01:49 PM)fls Wrote: Just an FYI - the "sneaky problem of false positives" is the problem of the failure to replicate, which has been discussed for years (in many fields - not singling out parapsychology here).
Linda

We've discussed that already. And like it or not, this seems to be a fact of life with these phenomena. They are not consistent, and people need to deal with it.

I know I'm not a professional researcher, but I am a professional engineer, and I don't believe that the results in test number N+1 negate the results found in test N.

OTOH, if multiple test runs with the same procedure show data that, when combined using meta-analysis techniques, negate each other, now you have something to talk about.

But I don't think that's what we see in the cases in question, is it? We see indicative results which vary in magnitude, some of which support the claim and others which do not, but which, taken in aggregate, show non-trivial support for the claims.

In other words, if the data in conflict with the claim are not significant enough to drag the combined P back up and negate the claim, the effect is demonstrated. Right?

Two steps forward and one back still results in forward motion by one step.
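
A minimal sketch of that aggregation argument, using Stouffer's method (a standard way of combining z-scores across studies). The three z-scores below are made up purely for illustration and don't come from any actual study:

```python
# Stouffer's method: combine per-study z-scores into one aggregate z.
# The z-scores are hypothetical, chosen only to illustrate
# "two steps forward, one back".
from math import sqrt
from scipy.stats import norm

z_scores = [2.1, 1.8, -0.9]  # two supportive studies, one contrary

# Combined z = sum of the z-scores divided by sqrt(number of studies).
z_combined = sum(z_scores) / sqrt(len(z_scores))
p_combined = norm.sf(z_combined)  # one-sided p-value

print(f"combined z = {z_combined:.2f}, one-sided p = {p_combined:.3f}")
# combined z = 1.73, one-sided p = 0.042
```

The aggregate clears the one-sided 0.05 threshold even though one study pointed the other way; a larger or stronger contrary study would drag the combined z back below it.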
(2017-10-06, 02:07 PM)jkmac Wrote: We've discussed that already. And like it or not, this seems to be a fact of life with these phenomena. They are not consistent, and people need to deal with it.

I know I'm not a professional researcher, but I am a professional engineer, and I don't believe that the results in test number N+1 negate the results found in test N.

OTOH, if multiple test runs with the same procedure show data that, when combined using meta-analysis techniques, negate each other, now you have something to talk about.

But I don't think that's what we see in the cases in question, is it? We see indicative results which vary in magnitude, some of which support the claim and others which do not, but which, taken in aggregate, show non-trivial support for the claims.

In other words, if the data in conflict with the claim are not significant enough to drag the combined P back up and negate the claim, the effect is demonstrated. Right?

Two steps forward and one back still results in forward motion by one step.
So as not to derail this thread, I posted my reply in the Statistical Significance thread Chris mentioned earlier.

http://psiencequest.net/forums/thread-90...ml#pid8515

Linda
(2017-10-06, 10:21 AM)fls Wrote: The article I pointed to earlier which explains this is:

http://rsos.royalsocietypublishing.org/c...1/3/140216

For the reasons I've already explained, I don't believe it's true that "smaller effects increase the likelihood that positive results are false positives, even in the setting of very low p-values" as you suggested, so I very much doubt that paper claims that.

If you can see a statement to that effect in the paper, by all means quote it. But I suggest you do so on the appropriate thread, not here:
http://psiencequest.net/forums/thread-90.html
(2017-10-06, 08:25 AM)Laird Wrote: Chris, this is obviously a matter for your own conscience, but I would be disappointed if you saw something misleading and felt compelled to refrain from pointing it out. That seems to me to be an overall loss to the forum.

I don't really know what the answer is, if someone wants to post on a public forum, but doesn't want to have any discussion with a particular member of the forum.

Apart from anything else, it's maybe not the best use of time to get into pointless arguments of the kind that have happened in the past. I hope people here don't tend to take things on authority from anonymous posters anyway, whoever says them.
(2017-10-06, 07:03 PM)Chris Wrote: I don't really know what the answer is, if someone wants to post on a public forum, but doesn't want to have any discussion with a particular member of the forum.

Apart from anything else, it's maybe not the best use of time to get into pointless arguments of the kind that have happened in the past. I hope people here don't tend to take things on authority from anonymous posters anyway, whoever says them.
I don't think anyone needs to regard this as remarkable. It's not a condition of membership that we are obliged to engage with everyone here, regardless of whether anything productive can come of it. Some members don't engage with me. Some members don't engage with other members. Who cares?

Realistically, these things are decided by belief anyway.

Linda
(2017-10-06, 09:43 PM)fls Wrote: I don't think anyone needs to regard this as remarkable. It's not a condition of membership that we are obliged to engage with everyone here, regardless of whether anything productive can come of it. Some members don't engage with me. Some members don't engage with other members. Who cares?

I was simply trying to be considerate by not responding to your posts, as you'd said you didn't wish to communicate with me. If you don't care whether I respond to them or not, I'll feel free to do so. If you prefer me not to, I won't - but in that case it will obviously make things easier if you don't respond to discussions I'm already involved in.
(2017-10-06, 06:29 PM)Chris Wrote: For the reasons I've already explained, I don't believe it's true that "smaller effects increase the likelihood that positive results are false positives, even in the setting of very low p-values" as you suggested, so I very much doubt that paper claims that.

If you can see a statement to that effect in the paper, by all means quote it. But I suggest you do so on the appropriate thread, not here:
http://psiencequest.net/forums/thread-90.html

Just a final word on this (on this thread). After a bit more thought, I think the answer is that there's no consistent relationship between the effect size and the likelihood of a false positive, other things (notably the p value) being equal. I think it's possible to construct examples where the effect size is smaller and the likelihood of a false positive is bigger, but also examples where the effect size is smaller and the likelihood of a false positive is smaller. It will depend on the values of the parameters. When I can find the time I'll try to post some equations on the other thread.
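
To sketch the parameter dependence in the meantime: the false positive risk can be written as FPR = α(1−π) / (α(1−π) + power·π), where π is the prior probability that the effect is real, and the power depends on both the effect size and the sample size. The code below assumes a one-sided one-sample z-test, and all the parameter values are illustrative assumptions, not anyone's actual estimates (and not the promised equations):

```python
# False positive risk for a one-sided one-sample z-test, as a function
# of standardized effect size d, sample size n, prior pi, and alpha.
# All numbers are illustrative assumptions.
from math import sqrt
from scipy.stats import norm

def power(d, n, alpha=0.05):
    """Power of a one-sided z-test for standardized effect size d."""
    z_crit = norm.isf(alpha)  # critical z for significance level alpha
    return norm.sf(z_crit - d * sqrt(n))

def false_positive_risk(d, n, pi, alpha=0.05):
    """P(effect is not real | significant result), given prior pi."""
    pw = power(d, n, alpha)
    return alpha * (1 - pi) / (alpha * (1 - pi) + pw * pi)

# Same sample size: the smaller effect has the HIGHER false positive risk.
print(false_positive_risk(d=0.5, n=50, pi=0.1))    # ~0.32
print(false_positive_risk(d=0.2, n=50, pi=0.1))    # ~0.52

# But the same small effect with a much larger sample has a LOWER risk
# than the first case, because the power is back near 1.
print(false_positive_risk(d=0.2, n=1000, pi=0.1))  # ~0.31
```

This conditions on crossing the α threshold rather than on an exact p value, but it makes the same point: with the threshold held fixed, a smaller effect can make a false positive either more likely (power falls at fixed n) or less likely (a big enough n restores power). It depends on the values of the parameters.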
(2017-10-06, 10:00 PM)Chris Wrote: Just a final word on this (on this thread). After a bit more thought, I think the answer is that there's no consistent relationship between the effect size and the likelihood of a false positive, other things (notably the p value) being equal. I think it's possible to construct examples where the effect size is smaller and the likelihood of a false positive is bigger, but also examples where the effect size is smaller and the likelihood of a false positive is smaller. It will depend on the values of the parameters. When I can find the time I'll try to post some equations on the other thread.

As I understand it, when the effect size is small it is that much more important to have a sufficiently powered study. Isn't a reason for this that, the smaller the sample, the more likely a small effect is going to be a false positive?
(2017-10-06, 10:10 PM)Arouet Wrote: As I understand it, when the effect size is small it is that much more important to have a sufficiently powered study. Isn't a reason for this that, the smaller the sample, the more likely a small effect is going to be a false positive?

As I said, I think we need equations. I'll try to post some on the other thread when I get a chance.
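
In the meantime, a back-of-the-envelope version of the power point, under the same one-sided z-test assumption as the sketch above (all numbers illustrative): the sample size needed for a given power grows with the inverse square of the effect size, which is why small effects demand big samples and why underpowered studies of small effects leave significant results with a high false positive risk.

```python
# Sample size needed for 80% power in a one-sided one-sample z-test,
# at alpha = 0.05, for a range of standardized effect sizes d.
from scipy.stats import norm

def n_required(d, alpha=0.05, target_power=0.80):
    """n such that a one-sided z-test on effect size d reaches target power."""
    z_alpha = norm.isf(alpha)            # ~1.645
    z_beta = norm.isf(1 - target_power)  # ~0.842 for 80% power
    return ((z_alpha + z_beta) / d) ** 2

for d in (0.8, 0.4, 0.2, 0.1):
    print(f"d = {d}: n ≈ {n_required(d):.0f}")
# d = 0.8: n ≈ 10, d = 0.4: n ≈ 39, d = 0.2: n ≈ 155, d = 0.1: n ≈ 618
```

Halving the effect size roughly quadruples the sample needed, so at a fixed sample size a smaller effect means lower power, and (via the false-positive-risk formula above) a significant result that is more likely to be spurious.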
