Statistical Significance


(2017-10-07, 12:56 PM)jkmac Wrote: I'm trying hard not to be rude right now, because if you look at my post, I pretty clearly state that either one of us might need to be set straight. Confused 

Your wink doesn't change the fact that you are being passive-aggressive, the way I see it.
I'm sorry. I often make self-deprecating jokes, of which that was one, except I made the mistake of including you in the self-deprecation.

Yes, either one of us might need to be set straight. That was a reference to who we tend to trust, based on our biases. If the person "with expertise in collecting and analyzing this sort of test data" who came to this forum was a proponent and agreed with you, that similarly wouldn't help me.

Linda
(2017-10-07, 01:54 PM)fls Wrote: I'm sorry. I often make self-deprecating jokes, of which that was one, except I made the mistake of including you in the self-deprecation.

Yes, either one of us might need to be set straight. That was a reference to who we tend to trust, based on our biases. If the person "with expertise in collecting and analyzing this sort of test data" who came to this forum was a proponent and agreed with you, that similarly wouldn't help me.

Linda

I'll listen to anyone that I think is reasonable, objective, open-minded and will share evidence, even if it proves me wrong.  Sad
(2017-10-07, 01:15 PM)jkmac Wrote: Do you work at being this annoying or does it come naturally to you?

Is what I wrote not an accurate summation?
(2017-10-07, 02:42 PM)Steve001 Wrote: Is what I wrote not an accurate summation?

No, actually. I would describe what you said as an annoying barb, not intended to advance the conversation in any way that I can discern.
(2017-10-07, 02:31 PM)jkmac Wrote: I'll listen to anyone that I think is reasonable, objective, open-minded and will share evidence, even if it proves me wrong.  Sad
Which explains why my joke went over like a lead balloon. Smile
(2017-10-07, 03:06 PM)fls Wrote: Which explains why my joke went over like a lead balloon. Smile

Yup. 

I've often said that this method of communication doesn't lend itself well to subtle humor, not when the people involved don't know each other, or each other's motivations and styles.

So unfortunately I guess we will grind some gears occasionally.
Maybe it would be useful to post a form of Bayes's Theorem here as an equation. I think it's worth emphasising that this isn't complicated maths - in fact it's really very simple arithmetic. (After all, Bayes died more than 250 years ago, and he was a minister of religion, not a professional scientist.)

This is a form of the equation appropriate for hypothesis-testing. Suppose we have two alternative hypotheses, H0 and H1. If the hypotheses are well defined and the processes at work are well understood, then we can work out the probability of observing a particular experimental result given that hypothesis H0 is true, and similarly for H1. But what we'd really like to know is not those probabilities, but the probability that H1 is true, given that we've observed a particular experimental result.

To work this out we need to be able to estimate from other considerations - before we carry out the experiment - the probability that H0 is true and the probability that the alternative H1 is true. If we can do that, then simply by considering the probabilities of all the possible situations, it's straightforward to work out the probability we want in terms of the known or assumed quantities, as:

Probability of H1 given result =
                                (Probability of result given H1) x (Probability of H1)
    _______________________________________________________________________________________________________________
    (Probability of result given H1) x (Probability of H1) + (Probability of result given H0) x (Probability of H0)

I hope that equation comes out reasonably clearly. It does for me, but maybe it won't with other browsers or on mobile devices. It can be written more concisely if we define some notation:

P(H1 | R) = P(R | H1) x P(H1) / (P(R | H1) x P(H1) + P(R | H0) x P(H0))

where P(H0) is the probability of Hypothesis 0 and P(A | B) means the probability of A being true given that B is true.
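
If it helps to see the arithmetic run, here's a rough sketch of the same calculation in Python. The function name and the sample numbers are just illustrative placeholders of mine, not anything taken from a real experiment:

def posterior_h1(p_h1, p_h0, p_r_given_h1, p_r_given_h0):
    # Bayes's theorem for two competing hypotheses:
    # returns P(H1 | R), the probability that H1 is true given result R.
    numerator = p_r_given_h1 * p_h1
    denominator = p_r_given_h1 * p_h1 + p_r_given_h0 * p_h0
    return numerator / denominator

# Illustrative numbers only: equal priors, and a result four times
# as likely under H1 as under H0.
print(posterior_h1(p_h1=0.5, p_h0=0.5, p_r_given_h1=0.2, p_r_given_h0=0.05))  # 0.8

With equal priors the answer is driven entirely by the ratio of the two likelihoods, which is really the point of the equation.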
(2017-10-07, 02:53 PM)jkmac Wrote: No, actually. I would describe what you said as an annoying barb, not intended to advance the conversation in any way that I can discern.

Quote:I can't help but think, however, that your experience in medical testing is corrupting your expectations and assumptions in regard to testing this sort of system. I'm suggesting that how one must interpret psi test data might be quite different due to the nature of the thing. I can't make any pronouncements to that effect, it's just an instinct I have.
Since I seem to be misunderstanding, what precisely do you mean?
Certainly when the experimental result relates to a medical diagnostic test, and we're trying to distinguish between the hypotheses that (i) the patient has the condition being tested for and (ii) the patient doesn't have it, an application of Bayes's theorem tells us exactly what we need to know - the probability that the patient has the condition given a positive test result (or doesn't have it given a negative test result). All the information that's needed can reasonably be estimated - the assumed probabilities of the two hypotheses will come from an estimate of the prevalence of the condition in the population, and the probability of a positive test result in each case will come from data on the previous use of the test on other patients.
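
To put some entirely made-up numbers on that: suppose the condition has a prevalence of 1%, the test picks up 90% of true cases, and it gives a false positive in 5% of healthy patients. Plugging those into the equation, in Python again:

# Hypothetical figures only: 1% prevalence, 90% sensitivity, 5% false-positive rate.
p_condition, p_healthy = 0.01, 0.99
p_pos_if_condition, p_pos_if_healthy = 0.90, 0.05

p_condition_given_pos = (p_pos_if_condition * p_condition) / (
    p_pos_if_condition * p_condition + p_pos_if_healthy * p_healthy)
print(round(p_condition_given_pos, 3))  # 0.154

So even after a positive result this hypothetical patient is still more likely not to have the condition, simply because it's rare - which is exactly the kind of thing Bayes's theorem makes explicit.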

The trouble with applying the same equation to psi experiments is that we don't have any objective way of estimating the prior probability that psi is real. And James Randi's subjective estimate of that probability is likely to be very different from Uri Geller's. Not only that, but we also need to be able to work out the probability of obtaining any given experimental result if psi does exist, and that requires a model of psi. Even if we assume psi is a perfectly well behaved phenomenon, we still have to make an assumption about the effect size, or about the range of effect sizes. And it may well be that psi isn't at all well behaved. If there's an important experimenter effect, for example, it may not be valid to assume successive trials are independent, and that would play havoc with the statistics.

Given those difficulties, I don't feel Bayesian techniques are of much use at all in interpreting most psi experiments, and I don't really understand why some parapsychologists embrace them to the extent they do.
But supposing someone pressed on regardless, and applied the formula to a typical psi experiment, they could make H0 the null hypothesis, in which psi doesn't exist, and H1 some kind of psi hypothesis, and they could dream up prior probabilities P0 and P1 to assign to these two hypotheses. 

Then, as usual, they could characterise the result of the experiment using some variable y, and they could choose a small number (call it a) as a significance level, so that "success" would be a result in which y was so large that such a value would be obtained with probability a if the null hypothesis were true. They could design their experiment by choosing another number less than 1 (call it b) as the power, so that the experiment would be successful with probability b if the psi hypothesis were true.

They could use an equation similar to the one above, with the set of results considered being those for which the experiment was a success. Swapping H0 and H1 in the equation above, they could use it to estimate the probability that the null hypothesis was true despite the experiment being a success (i.e. a "false positive"). In the equation, the probability of the result given H0 would simply be a, and the probability of the result given H1 would simply be b. 

Then after a bit of rearrangement, they would get:

Probability of null given success =
                  1
    _____________________
    1 + b x P1 / (a x P0)

Probability of psi given success =
      b x P1 / (a x P0)
    _____________________
    1 + b x P1 / (a x P0)

So to comment on Laird's point above, both the significance level (in other words the threshold p value for declaring success) and the power - as well as the prior estimates P0 and P1 - influence the estimates of the likelihood of psi and no-psi. In fact, as the power is likely to be a number somewhat less than 1, but not small, whereas the significance level may be quite small, the probabilities are likely to be more sensitive to the significance level than to the power.
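
As a purely illustrative check (the equal priors here are numbers I've pulled out of the air, which is of course the whole problem), the two equations above can be evaluated in Python for a few combinations of a and b:

def p_psi_given_success(a, b, p0, p1):
    # Probability of the psi hypothesis given a "successful" experiment,
    # i.e. (b x P1 / (a x P0)) / (1 + b x P1 / (a x P0)) from the equation above.
    ratio = (b * p1) / (a * p0)
    return ratio / (1.0 + ratio)

# Made-up equal priors for psi and no-psi.
print(p_psi_given_success(a=0.05, b=0.8, p0=0.5, p1=0.5))   # ~0.941
print(p_psi_given_success(a=0.05, b=0.4, p0=0.5, p1=0.5))   # ~0.889 (power halved)
print(p_psi_given_success(a=0.005, b=0.8, p0=0.5, p1=0.5))  # ~0.994 (significance level ten times smaller)

Halving the power barely moves the answer, while shrinking the significance level tenfold moves it much more - which is the sensitivity point made above.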
