Interview with Dr. Henry Bauer - Part 1


(2017-10-18, 01:05 PM)Laird Wrote: I don't see how this is, as Steve suggested, more scientifically rigorous. For example:


I don't get it. How is a test in isolation better than a comparative test?

When "remarkable correspondences" are what people use to say, "there must be psi", a study which shows that remarkable correspondences are produced when there is no psi is useful. I agree studies which have both conditions are usually even more useful. I suspect Steve's complaint was that there are very little of either of those in parapsychology.

Quote:No. Wiseman constructed an arbitrary criterion: if Jaytee went to the door at any time before Pam was returning home, then the trial was counted as a failure.

It doesn't matter what his criterion was; the study showed that there wasn't any sort of signal in JayTee's behavior which didn't also occur while Pam was not returning home (you can change the criterion to whatever you want and this still holds). And remember, it was Pam's parents' claim that there was a signal that led to the idea that JayTee knew when she was coming home in the first place.

Quote:No again. Rupert Sheldrake compared the scenarios of "Pam returning" to "Pam not returning" - and you have admitted that such a comparative test could be either described as "falsifying" or "confirmatory", because functionally both are identical - whereas...

This situation is different from what I described because there was no condition where a before and after comparison was made when Pam was not returning home. That is, Sheldrake only selected the point of Pam's return to form before and after groups. Then he compared those two groups and obtained the unsurprising result that the latter was larger. Unsurprising, because in general the dog simply spent more and more time at the door the longer Pam was gone. Had Sheldrake also chosen points unrelated to Pam's return in order to form before and after groups, he would have obtained the same result.
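Just to illustrate (a toy simulation with invented numbers, not anything from Sheldrake's or Wiseman's actual data): if the time the dog spends at the window simply trends upward, a before/after comparison comes out "positive" no matter where you place the cut point.

```python
import random

# Toy model with invented numbers (not Sheldrake's or Wiseman's data): the
# longer Pam is away, the more of each 10-minute block the dog spends at the
# window, with a little random noise.
def simulate_absence(n_blocks=18):
    return [min(10.0, 0.5 * t + random.uniform(0.0, 2.0)) for t in range(n_blocks)]

def after_minus_before(series, cut):
    """Mean minutes at the window after the cut point minus the mean before it."""
    before, after = series[:cut], series[cut:]
    return sum(after) / len(after) - sum(before) / len(before)

random.seed(1)
trials = [simulate_absence() for _ in range(200)]

return_cut = 12     # hypothetical block at which Pam sets off home
arbitrary_cut = 8   # a point with no relation to her return

for label, cut in [("cut at 'return' point", return_cut),
                   ("cut at arbitrary point", arbitrary_cut)]:
    mean_diff = sum(after_minus_before(t, cut) for t in trials) / len(trials)
    print(f"{label}: mean after-minus-before = {mean_diff:.2f} minutes")

# Both differences come out clearly positive: any upward trend makes "after"
# exceed "before" regardless of where the cut is placed.
```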

Quote:....Wiseman weaselled his way around this by stipulating that if at any time prior to Pam returning home Jaytee went to the window, then the test was a failure. This is the height of biasing the odds in your favour. He at least accepted that the data he got in his experiment were comparable with the data Rupert Sheldrake got.

Yes, Wiseman agreed that his data also showed the general pattern that the dog went to the window more and more the longer Pam was gone. But as I mentioned, this explains why you need a disconfirming test - when your data will produce a "positive" result regardless of whether psi is present, a "positive" result doesn't tell you whether psi is real.

Quote:I can imagine it. Here's what I'm imagining: an utterly shitty test. Without a comparison to the scenario when Pam was returning home, it would be utterly stupid to conclude anything from solely the results when Pam was not returning home.

Sure you can. You can conclude that at least some of the time, Pam's parents were mistaken about the signal - it wasn't always associated with Pam's return.

Quote:Basically, I see no argument or evidence to back up Steve's claim that parapsychology has some unique difference to the rest of science with respect to "proving" rather than "falsifying". The only examples you, Linda, have given are poor ones, since they avoid comparisons - and comparisons are how we find out which alternative is more likely. You have admitted that where there is a comparison, the test cannot be definitively described as either "confirmatory" or "falsifying" because both are functionally identical - I would like you now to admit that non-comparative tests - where comparative tests are possible -  are inferior. Can you do that?

I'm sorry, but what are you going on about? I gave you examples, some of which are hypothetical, because I thought you wanted clear examples of the difference between a condition which confirms an idea and a condition which falsifies an idea, not because I thought any of them were ideal.

I did mention well-constructed control groups (for comparison) in my first post. I agree that well-constructed comparisons (which means that there is a comparison with the results you would obtain in the absence of psi) are generally better than non-comparative studies. As I said, I suspect Steve's criticism comes about because parapsychology has very few of those. But I don't want to speak for him, so ask him what he meant. As I mentioned earlier, I wouldn't make a fuss about this a priori (except that the lack of comparison with 'no psi' makes progress slow in parapsychology).

ETA: Try this instead...studies which include falsification are more rigorous.

Linda
(This post was last modified: 2017-10-18, 02:08 PM by fls.)
(2017-10-18, 01:28 PM)Laird Wrote: Fair enough, Chris, but you say that as something like an aside, right? Because I can't see how it bears directly on the proposition Steve made that parapsychology is unique in trying to "prove" rather than "falsify" hypotheses.

Yes - I was just slowly following my own train of thought about the role of falsifiability in psi research, and the fact that the hypotheses parapsychologists try to falsify are usually null hypotheses rather than psi hypotheses.
[-] The following 1 user Likes Guest's post:
  • Laird
(2017-10-18, 01:58 PM)fls Wrote: When "remarkable correspondences" are what people use to say, "there must be psi", a study which shows that remarkable correspondences are produced when there is no psi is useful. I agree that studies which have both conditions are usually even more useful. I suspect Steve's complaint was that there are very few of either of those in parapsychology.

I suspect that you're putting lipstick on a pig when it comes to what Steve's claim was.

But is it really true that there are very few studies in parapsychology which have both conditions? I can think of categories of parapsychological experiments that include control sessions where there is (presumed to be) no psi: micro-PK tests with RNGs which are run when nobody is trying to influence them; "detection of staring" tests which include trials in which nobody is staring.

The main example I can think of in which this isn't the case is the Ganzfeld, but there seem to be others, such as card-guessing experiments. But let's say that in these cases, parapsychologists included control sessions with which to compare the live sessions - and let's say that statistically significant results were found in those sessions too: this would anyway indicate that something anomalous is going on in the control sessions too; hardly a "falsification" of the psi hypothesis!

I will not argue with you over the Jaytee experiments. You are obviously ignorant of the facts. It's quite striking just how ignorant. For example, you repeat the claim that was specifically tested and found to be false by Rupert Sheldrake in his original experiment, and which was documented in the paper he first published on these experiments: that the results can be explained by the hypothesis that as time went on, Jaytee went to the window more and more. Are you not embarrassed to write authoritatively on a matter about which you are so obviously uneducated?

An article written by Rupert Sheldrake clears all of this up: Richard Wiseman's claim to have debunked "the psychic pet phenomenon".

(2017-10-18, 01:58 PM)fls Wrote: I thought you wanted clear examples of the difference between a condition which confirms an idea and a condition which falsifies an idea

Well, I'm not sure how "clear" the only "definitive" example you've given is:

(2017-10-18, 12:26 PM)fls Wrote: imagine that Wiseman looked only at whether the signal was present when Pam was not returning home, for a definitive example of a disconfirming test.

To start with, it's not clear to me what the null and alternative hypotheses would be in this scenario - or if it's even possible to construct meaningful ones. But more to the point, let's say that somehow "a signal" was found by this test: this wouldn't "falsify" anything (in and of itself), because the possibility would remain that an even greater signal (with a statistically significant difference) would be found when Pam was returning home.

So, I make two suggestions: firstly, that "disconfirming" tests (by your (Linda's) definition) which cannot be reframed as functionally equivalent "confirming" tests are difficult to construct, and - based on the only "definitive" example you've given - apparently can't even provide conclusive results, and, secondly, in part following on from this, that Steve's claim that parapsychology isn't a real science because it doesn't perform "falsifying" tests is bunkum.
(This post was last modified: 2017-10-19, 02:10 AM by Laird.)
[-] The following 2 users Like Laird's post:
  • Reece, Doug
(2017-10-18, 02:13 PM)Chris Wrote: the hypotheses parapsychologists try to falsify are usually null hypotheses rather than psi hypotheses.

Isn't this a matter of semantics though? i.e. Couldn't you equally say that these tests are trying to falsify the alternative (psi) hypothesis by demonstrating that no significant effect exists and thus that the null cannot be rejected?

This is pretty much what I was getting at in my original reply to Steve to which Linda seemed to object: '"Proving" an idea scientifically is roughly the same as trying to falsify it and failing. Compare with "rejecting the null hypothesis"'.
(This post was last modified: 2017-10-19, 12:36 AM by Laird.)
(2017-10-19, 12:35 AM)Laird Wrote: Isn't this a matter of semantics though? i.e. Couldn't you equally say that these tests are trying to falsify the alternative (psi) hypothesis by demonstrating that no significant effect exists and thus that the null cannot be rejected?

This is pretty much what I was getting at in my original reply to Steve to which Linda seemed to object: '"Proving" an idea scientifically is roughly the same as trying to falsify it and failing. Compare with "rejecting the null hypothesis"'.

I don't think it's really semantics, because while it's usually obvious what the null hypothesis is, it's not obvious what the psi hypothesis should be. It's the same problem that occurs with applying Bayesian analysis to psi. You need to be able to calculate probabilities based on a psi hypothesis. But we don't know the first thing about psi, so we really can't calculate the statistics that would prevail if psi existed.

It's very difficult in our present state of knowledge to promulgate meaningful psi hypotheses - meaningful in the sense that falsifying them would tell us something useful. As things stand, falsification might just indicate that there's a necessary condition for psi that we don't understand - which wasn't met in the experiment in question.
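To make that concrete, here is a rough sketch (with an invented hit count and the familiar Ganzfeld 25% chance rate) of how a Bayesian comparison stalls: the Bayes factor depends entirely on what hit rate the "psi hypothesis" is assumed to predict, and nothing tells us which rate to assume.

```python
from math import comb

def binomial_likelihood(hits, trials, p):
    """P(exactly `hits` hits in `trials` trials | true hit rate p)."""
    return comb(trials, hits) * p**hits * (1 - p)**(trials - hits)

# Invented example data: 35 hits in 100 Ganzfeld-style trials.
hits, trials = 35, 100
chance_likelihood = binomial_likelihood(hits, trials, 0.25)

# The Bayes factor (psi vs. chance) depends entirely on the hit rate the
# "psi hypothesis" is assumed to predict - and nothing tells us which to pick.
for assumed_rate in (0.27, 0.33, 0.40, 0.60):
    bf = binomial_likelihood(hits, trials, assumed_rate) / chance_likelihood
    print(f"assumed psi hit rate {assumed_rate:.2f}: Bayes factor = {bf:.3g}")
```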

[-] The following 1 user Likes Guest's post:
  • Laird
(2017-10-19, 12:52 AM)Chris Wrote: I don't think it's really semantics, because while it's usually obvious what the null hypothesis is, it's not obvious what the psi hypothesis should be.

OK, so, just to make sure I understand what you're saying: if it was obvious what the psi hypothesis should be, then you would agree after all with my suggestion that it's purely a matter of semantics as to whether one is trying to falsify the null or alternative hypothesis?
(2017-10-19, 01:08 AM)Laird Wrote: OK, so, just to make sure I understand what you're saying: if it was obvious what the psi hypothesis should be, then you would agree after all with my suggestion that it's purely a matter of semantics as to whether one is trying to falsify the null or alternative hypothesis?

Yes - if you've got two alternative hypotheses that give different predictions about something testable, then you can try to distinguish between them experimentally, and the same experiment will serve to falsify whichever of them is false (or maybe both of them).

But I still don't like the argument that trying to falsify a hypothesis and failing is roughly equivalent to proving it. It's not true of the null hypothesis, because it could be wrong by an infinitesimal amount, which your experimental tests were incapable of resolving. Or it could be wrong in ways that you just hadn't thought of testing for. And the same goes for any other hypothesis. Failure to disprove can't be equivalent to proof.
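A quick illustration of the "infinitesimally wrong" point (a simulation with made-up parameters, not a claim about any real experiment): suppose the true hit rate is 50.1% rather than exactly 50%. With a thousand trials per experiment, a standard significance test almost never rejects the 50% null, even though the null is strictly false.

```python
import random
from math import sqrt

def rejects_null(hits, n, p0=0.5, z_crit=1.96):
    """Two-sided z-test of the null 'hit rate = p0' at roughly the 5% level."""
    z = (hits / n - p0) / sqrt(p0 * (1 - p0) / n)
    return abs(z) > z_crit

random.seed(0)
true_rate = 0.501        # the null is false, but only just
n_trials = 1000          # per experiment
n_experiments = 2000

rejections = sum(
    rejects_null(sum(random.random() < true_rate for _ in range(n_trials)), n_trials)
    for _ in range(n_experiments)
)
print(f"null rejected in {rejections / n_experiments:.1%} of experiments")
# The rejection rate stays close to the 5% false-positive level: the test has
# essentially no power against so small a departure, so "failing to falsify"
# the null proves nothing about its truth.
```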
(2017-10-19, 01:32 AM)Chris Wrote: But I still don't like the argument that trying to falsify a hypothesis and failing is roughly equivalent to proving it. It's not true of the null hypothesis, because it could be wrong by an infinitesimal amount, which your experimental tests were incapable of resolving. Or it could be wrong in ways that you just hadn't thought of testing for. And the same goes for any other hypothesis. Failure to disprove can't be equivalent to proof.

You don't think all this is covered by the edit I made to my original post?: "of course, in science as it is currently practised, there is strictly speaking no such thing as "proof" anyway, only provisional degrees of confidence".
(This post was last modified: 2017-10-19, 01:50 AM by Laird.)
(2017-10-19, 12:24 AM)Laird Wrote: But is it really true that there are very few studies in parapsychology which have both conditions? I can think of categories of parapsychological experiments that include control sessions where there is (presumed to be) no psi: micro-PK tests with RNGs which are run when nobody is trying to influence them; "detection of staring" tests which include trials in which nobody is staring.

It depends upon whether the control is well-constructed - that is, the control conditions should be identical to the experimental conditions, except for the presence/absence of psi, rather than the presence/absence of staring (for example). It might help if you think of it as establishing an empirical baseline - measuring how well people detect staring when there is no psi. The obvious difficulty for some of these ideas is how to form a set-up where there is 'no psi'. 

Quote:The main example I can think of in which this isn't the case is the Ganzfeld, but there seem to be others, such as card-guessing experiments. But let's say that in these cases, parapsychologists included control sessions with which to compare the live sessions - and let's say that statistically significant results were found in those sessions too: this would anyway indicate that something anomalous is going on in the control sessions too; hardly a "falsification" of the psi hypothesis!

I assume that when you say "statistically significant" you are talking about rejecting a 'chance' hypothesis (like the Ganzfeld hypothesis that hits should be 25% due to chance)? It should be a falsification of a psi hypothesis, though. If significant results are found in both, it tells you that the results aren't due to psi (since the results are the same regardless of whether psi is present). Or at the least it narrows down what 'psi' could be (e.g. not telepathy or clairvoyance or precognition). And please note that falsifying a psi hypothesis is not saying that psi has been falsified. It's that we are narrowing down what 'psi' is or is not. 
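To be concrete about what rejecting that 'chance' hypothesis looks like in both kinds of session (the hit counts below are invented purely for illustration, not real Ganzfeld data):

```python
from math import comb

def binomial_p_value(hits, trials, p=0.25):
    """One-sided P(X >= hits) under the chance hypothesis of a 25% hit rate."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Invented counts purely to show the comparison, not real Ganzfeld results.
for label, hits, trials in [("live sessions", 38, 100), ("control sessions", 36, 100)]:
    print(f"{label}: {hits}/{trials} hits, p = {binomial_p_value(hits, trials):.4f}")

# If both session types beat the 25% chance rate to a similar degree, the
# "significant" result can't be attributed to whatever differs between them.
```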

Quote:I will not argue with you over the Jaytee experiments. You are obviously ignorant of the facts. It's quite striking just how ignorant. For example, you repeat the claim that was specifically tested and found to be false by Rupert Sheldrake in his original experiment, and which was documented in the paper he first published on these experiments: that the results can be explained by the hypothesis that as time went on, Jaytee went to the window more and more. Are you not embarrassed to write authoritatively on a matter about which you are so obviously uneducated?

I assure you that I am very familiar with the JayTee experiments, as well as the back and forth between Sheldrake and Wiseman. I am aware that Sheldrake performed an analysis which he claimed tested that hypothesis. I agree that it will not be productive to argue with you over whether the analysis served as a test of that hypothesis. However, simply for the sake of distinguishing between confirming and disconfirming tests, I think we can agree that he did not perform the analysis I described earlier - comparing the before and after periods at points unrelated to Pam's return.

Quote:Well, I'm not sure how "clear" the only "definitive" example you've given is:

To start with, it's not clear to me what the null and alternative hypotheses would be in this scenario - or if it's even possible to construct meaningful ones.

The alternative and null hypotheses would be "JayTee does not give a signal during a 'no-return' period" and "JayTee gives a signal during a 'no-return' period at least some of the time".

Quote:But more to the point, let's say that somehow "a signal" was found by this test: this wouldn't "falsify" anything (in and of itself), because the possibility would remain that an even greater signal (with a statistically significant difference) would be found when Pam was returning home.

It falsifies the idea that the signal is exclusively associated with Pam's return. That then leads you to test new hypotheses, such as whether the signal is more likely to be found when Pam is returning home, which is progress.
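That follow-up hypothesis is a comparative test of the kind discussed above. A rough sketch of how such a comparison might be analysed (invented counts, and a simple two-proportion z-test standing in for whatever analysis one would actually prefer):

```python
from math import sqrt, erf

def two_proportion_z(signals_a, periods_a, signals_b, periods_b):
    """Two-sided z-test comparing the signal rate in two kinds of period."""
    p_a, p_b = signals_a / periods_a, signals_b / periods_b
    pooled = (signals_a + signals_b) / (periods_a + periods_b)
    se = sqrt(pooled * (1 - pooled) * (1 / periods_a + 1 / periods_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented counts for illustration only: how often the "signal" occurred
# during return periods versus matched no-return periods.
z, p = two_proportion_z(signals_a=18, periods_a=30,   # return periods
                        signals_b=9,  periods_b=30)   # no-return periods
print(f"z = {z:.2f}, p = {p:.4f}")

# A comparative test like this asks whether the signal is *more likely* when
# Pam is returning, rather than whether it ever occurs at all.
```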

Quote:So, I make two suggestions: firstly, that "disconfirming" tests (by your (Linda's) definition) which cannot be reframed as functionally equivalent "confirming" tests are difficult to construct,

I don't think that's the case. The only example we had of two functionally equivalent statements wasn't a disconfirming test paired with a confirming test - it was a test that was both. For example, how could you frame the alternative and null hypotheses I gave above for a disconfirming test as a confirming test?

Quote:and - based on the only "definitive" example you've given - apparently can't even provide conclusive results,

It seems to. I stated a conclusion which could be drawn.

Linda
[-] The following 1 user Likes fls's post:
  • Arouet
(2017-10-19, 01:50 AM)Laird Wrote: You don't think all this is covered by the edit I made to my original post?: "of course, in science as it is currently practised, there is strictly speaking no such thing as "proof" anyway, only provisional degrees of confidence".

Yes - fair enough. In practice, that kind of reasoning has to underlie the claims made for the laws of physics (that kind of reasoning plus the circular argument that any evidence to the contrary can be ignored because it contravenes the laws of physics).
[-] The following 1 user Likes Guest's post:
  • Laird
