Interview with Dr. Henry Bauer - Part 1

(2017-10-26, 12:39 PM)Chris Wrote: It's an eternal mystery to me why people take you seriously.

That's an interesting conundrum. 

I think one possibility is that people learn a lot in these engagements. One of the strongest stimulants to engaging with a subject is to prove a hated opponent wrong. So regardless of whether or not anyone listens to me, people come out of it knowing more than they did going in. 

As a case in point, back in the statistical significance thread, I said something that Laird was so keen to prove was dumb that he found and read an article which he felt proved his point and PM'd the link to me. What he hadn't noticed in all that was that the article outlined in graphical detail the truth of the statement, "smaller effects (decreasing power) increase the likelihood that positive results are false-positives, even (especially) in the setting of very low p-values". So even though the public facade is "Linda is wrong and just won't own up to it", Laird now understands why what I was talking about is true.
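As an illustrative aside (not from the original post): a minimal Python sketch of the arithmetic behind the quoted statement. The prior probability of a real effect and the significance level used below are assumptions chosen purely for illustration; the point is only that, with those held fixed, lower power increases the proportion of significant results that are false positives.

```python
# A minimal sketch (assumptions: prior = 0.1 that the studied effect is real,
# significance level alpha = 0.05) of how power affects the chance that a
# significant result is a false positive.

def false_positive_fraction(power, alpha=0.05, prior=0.1):
    """Fraction of significant results that are false positives, given the
    test's power, its significance level, and the prior probability that
    the effect under study is real."""
    true_positives = power * prior          # real effects correctly detected
    false_positives = alpha * (1 - prior)   # null effects wrongly declared significant
    return false_positives / (true_positives + false_positives)

for power in (0.9, 0.5, 0.2, 0.05):
    print(f"power = {power:.2f} -> P(false positive | significant) = "
          f"{false_positive_fraction(power):.2f}")
```

(The sketch works at the significance threshold itself; the quoted statement's further point about very low p-values is a refinement of the same idea.)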

I don't like that dynamic, though. I'm not fond of the vitriol, to be honest. But on the other hand, I bring these things up so that people are aware of them, not because I expect or want people to hang on what I say, or even agree with me. And I don't think some people would look into these subjects without your and others' vitriol. So I guess it's a good thing? Not that I want to throw myself under the bus, though. Wink

Linda
(This post was last modified: 2017-10-27, 02:09 PM by fls.)
The following 3 users Like fls's post:
  • berkelon, malf, chuck
(2017-10-27, 02:06 PM)fls Wrote: As a case in point, back in the statistical significance thread, I said something that Laird was so keen to prove was dumb that he found and read an article which he felt proved his point and PM'd the link to me. What he hadn't noticed in all that was that the article outlined in graphical detail the truth of the statement, "smaller effects (decreasing power) increase the likelihood that positive results are false-positives, even (especially) in the setting of very low p-values". So even though the public facade is "Linda is wrong and just won't own up to it", Laird now understands why what I was talking about is true.

The fact that you're still trying to pretend you were right about that only underlines my point.

You'd rather mislead people than admit to getting something wrong. That's fatal.
The following 1 user Likes Guest's post:
  • tim
(2017-10-27, 06:59 PM)Chris Wrote: The fact that you're still trying to pretend you were right about that only underlines my point.

You'd rather mislead people than admit to getting something wrong. That's fatal.

But that's the beauty of it - I can't mislead people. You're presuming (hoping) that nobody should take me seriously - that nobody should be listening to me in the first place. So if someone starts talking about the same thing I've been talking about all along, it means that they came to that understanding from looking at information independently and working it out. It couldn't possibly have come from me (cuz I'm full of shit, doncha know), therefore you don't have to worry about me 'misleading' anyone. 

It's kinda a win-win. 

(Although again, I don't want to give anyone the impression that I have noble motives here. I'd still rather be treated with decency than hostility, but there is a bright side to it for the forum.)

Linda
(2017-10-27, 08:11 PM)fls Wrote: But that's the beauty of it - I can't mislead people.

You seem to be trying your utmost.
The following 1 user Likes Guest's post:
  • tim
(2017-10-27, 02:06 PM)fls Wrote: As a case in point, back in the statistical significance thread, I said something that Laird was so keen to prove was dumb that he found and read an article which he felt proved his point and PM'd the link to me. What he hadn't noticed in all that was that the article outlined in graphical detail the truth of the statement, "smaller effects (decreasing power) increase the likelihood that positive results are false-positives, even (especially) in the setting of very low p-values". So even though the public facade is "Linda is wrong and just won't own up to it", Laird now understands why what I was talking about is true.

Firstly, I wasn't trying to prove that anything you said was "dumb" when I shared that article. In fact, I never used the word "dumb" about anything you'd said: it was you who called something I had said "dumb". And that's why I shared the article: to demonstrate that a public figure with apparent expertise had said essentially the same thing I had said. You accepted this, but pointed out that, in context, the earlier (public) comment of mine which my (also public) comment in question had been clarifying might have been read as implicitly committing the logical fallacy of affirming the consequent (even though, strictly speaking, it was correct), and I accepted that it might.

Neither Chris nor I contested that the lower the power, the less likely low p-values are to be obtained (and hence the more likely a positive result is to be a false positive). We simply interpreted your statement differently: you didn't explicitly refer to power, only to "smaller effects", and both Chris and I understood this to mean "smaller effects with the same power" - which would have made your statement false. You eventually clarified that we had misinterpreted you, and that you had intended the power to be reduced in proportion to the smaller effects.
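To make the two readings concrete, here is a small illustrative Python sketch (not part of the original exchange; the effect sizes, sample sizes, significance level and prior probability are assumptions chosen for the example). With power held constant, the proportion of false positives among significant results does not change as the effect shrinks, whereas with the sample size held constant the power falls and that proportion rises.

```python
# An illustrative sketch of the two readings of "smaller effects increase the
# likelihood that positive results are false positives". Effect sizes, sample
# sizes, alpha and the prior are assumptions; the power formula is the normal
# approximation for a two-sided one-sample z-test.
from scipy.stats import norm

ALPHA, PRIOR = 0.05, 0.1

def power_z(effect, n, alpha=ALPHA):
    """Approximate power for standardized effect size `effect` and sample size `n`."""
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(effect * n ** 0.5 - z_crit)   # the far tail is negligible and ignored

def false_positive_fraction(power, alpha=ALPHA, prior=PRIOR):
    """Fraction of significant results that are false positives."""
    return alpha * (1 - prior) / (alpha * (1 - prior) + power * prior)

# Reading 1 ("smaller effects with the same power"): the sample size is raised
# so power stays near 80%; the false-positive fraction barely moves, which is
# why this reading makes the statement false.
print(false_positive_fraction(power_z(0.5, 32)))    # larger effect,  n = 32  -> ~0.36
print(false_positive_fraction(power_z(0.2, 200)))   # smaller effect, n = 200 -> ~0.36

# Reading 2 ("smaller effects, hence decreasing power"): the sample size is
# unchanged, power drops to roughly 20%, and the fraction rises sharply,
# which is the sense in which the statement is true.
print(false_positive_fraction(power_z(0.2, 32)))    # smaller effect, n = 32  -> ~0.69
```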

I accept that clarification. What I contest is the claim that the interpretation at which Chris and I separately arrived was unjustifiable. That is the only point of contention, because you do maintain that our interpretation was unjustified.

Given this, it is "interesting" that you have modified your original statement in the quote above. I have marked between asterisks the bits you have added:

"smaller effects (decreasing power) increase the likelihood that that positive results are false-positives, even (especially) in the setting of very low p-values"

Had you included the parenthetical comment "(decreasing power)" in your original statement, there would have been no possibility of Chris and me misinterpreting you.

So... not such a good "case in point".
The following 2 users Like Laird's post:
  • tim, Doug
Laird

The point is that what fls referred to was a smaller effect size, not smaller power. That was simply wrong, because it is power that determines the likelihood that a positive result is a false positive, not effect size as such. Different experimental studies of a weaker effect could have smaller power than, the same power as, or larger power than a study of a stronger effect.

But of course, as everyone agrees that studies need to be adequately powered, if we had to guess at what fls was implying, the sensible guess would have been that the power was the same. It's only common sense that more trials are needed to study a weak effect.
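As a rough illustration of that last point (a sketch added here, not from the thread; the 80% power target, significance level and effect sizes are assumptions), here is how the number of trials needed to hold power constant grows as the standardized effect size shrinks:

```python
# A rough sketch of why weaker effects need more trials: the approximate sample
# size required to reach a given power at a given alpha, using the normal
# approximation for a two-sided one-sample z-test. The 80% power target and
# the effect sizes are illustrative assumptions.
from scipy.stats import norm

def n_required(effect, power=0.8, alpha=0.05):
    """Approximate sample size needed for the given power at the given alpha."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return ((z_alpha + z_power) / effect) ** 2

for d in (0.5, 0.2, 0.1):
    print(f"effect size {d}: roughly {n_required(d):.0f} trials for 80% power")
```

Read the other way round, the same relationship is the point above: a study of a weaker effect can end up with lower, equal, or higher power than a study of a stronger effect, depending entirely on how many trials it runs.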

And yet here we are, weeks later, still arguing the toss - simply because she can't bear to admit she got it wrong.
Fair enough, Chris. I think that's a justifiable view, although I'm willing to extend Linda the benefit of the doubt and grant that she's being honest when she says she did mean that power decreased even though she only mentioned effect size decreasing (because that would be the case if the number of trials remained unchanged, which is what she claims her statement assumed).
(This post was last modified: 2017-10-27, 09:21 PM by Laird.)
Quote: We simply interpreted your statement differently: you didn't explicitly refer to power, only to "smaller effects"...

The statement I made, which you specifically quoted when you asked for Chris' comments, in the thread in which Chris posted his formulas, said:

"True positives are determined by the sensitivity of the test, which in the case of significance testing with p-values is something called "power". Sensitivity (power) tells us how many positive tests (significant studies) we can expect when the patent has the condition (there is psi). "Power" depends upon the size of the effect and the size of the study (and the sensitivity which we can ignore for now)."

http://psiencequest.net/forums/thread-90...ml#pid8515
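For readers unfamiliar with the diagnostic-test analogy in the quoted statement, here is a brief illustrative Python sketch (not from the original post; the sensitivity, false-positive rate and prevalence figures are assumptions): power plays the role of sensitivity, the significance level plays the role of the false-positive rate, and the prior probability that the effect is real plays the role of prevalence.

```python
# A sketch of the diagnostic-test analogy: the same Bayes'-theorem arithmetic
# gives the chance that a positive test (or a significant study) reflects a
# real condition (or a real effect). The numbers are illustrative assumptions.

def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """P(condition is real | test is positive)."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Diagnostic reading: a 90%-sensitive test, 5% false-positive rate, 10% prevalence.
print(positive_predictive_value(0.90, 0.05, 0.10))   # -> ~0.67

# Significance-testing reading: same arithmetic with power in place of
# sensitivity, alpha in place of the false-positive rate, and the prior
# probability of a real effect in place of prevalence.
print(positive_predictive_value(0.20, 0.05, 0.10))   # underpowered study -> ~0.31
```

Only the vocabulary changes between the two readings; the arithmetic is identical.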

Linda
(This post was last modified: 2017-11-13, 11:49 PM by fls.)
And this just illustrates the whole problem with trying to have any kind of reasonable discussion with fls.

She made a statement that wasn't true. It was demonstrated mathematically to be false. But will she admit she got it wrong? Never in a million years!

Instead we just get endless spin, trying to make out she didn't actually mean what she said, there was something she'd implied, and it was our fault for not having guessed it, and so on and so forth ad nauseam. And in the end, she tries to play the victim, and claims we're trying to "defame" her. 

Just the same on this thread. There were some fairly non-contentious remarks about the falsifiability of hypotheses. In she jumps, quoting the paper about "p" and "not p", "q" and "not q", misunderstanding pretty much the whole thing and trying to make a totally bogus distinction between looking at "p" being confirming and looking at "not q" being falsifying - when in fact there's no difference at all between them. And in the end the whole thread gets covered in confusion and tied in knots. Any chance of an acknowledgment that she got it wrong? No chance at all! Just the usual stream of snide remarks and condescension, trying to make out it's everyone else's fault.

And I suspect she's just loving every minute of it...
The following 1 user Likes Guest's post:
  • tim
