Psience Quest

Full Version: How to find valid information?
(2017-11-03, 04:43 PM)Silence Wrote: [ -> ]Let's start here: What method do you advise a layman, such as myself, employ?

As I believe I've made clear by now, I think the question of proper authority is a very difficult issue and made even more difficult when authorities opine on topics outside their expertise.  As a layman, I really have no idea whom to believe and tend to default to the consensus.

I see two different issues and I don't know if they need separate threads, or if they can be covered by one.

1. What to think about proposed threats to seemingly established sciences (the alt-science views on evolution, cosmology, etc.).

2. How to get a sense of the validity of seemingly fringe ideas (like parapsychology, cold fusion, etc.).

With respect to proposed threats to seemingly established sciences:

Look to mainstream sources of science information: textbooks in general use; articles in good science magazines (e.g. Scientific American); educational resources from the main professional body of the relevant scientific field; governmental and NGO organizations formed to advise the government on scientific matters; and educational materials from well-regarded academic institutions. If you want to know which magazines generally offer good science reporting (e.g. Wired or The New York Times Sunday Magazine), look at the sources drawn on by The Best American Science Writing and The Best American Science and Nature Writing series.

Where are the proposed threats coming from - within the field or from without? Who is taking it seriously - does it show up in the subject lists of the main conferences within the field, in the main research journals in the field, in articles from mainstream sources? Try to get a sense of whether the people with the most knowledge and experience regard it as valid.

I make these particular recommendations based on my own experience with medicine. With respect to the various controversies over the years which I've encountered - some valid, some not - these practices would allow most laypeople to be able to identify which threats were valid and which were not.

I think a good discussion could be had on this, as I'm pretty sure some people here feel pretty strongly that this is the wrong way to go about it. 

More difficult is how to get a sense of seemingly fringe ideas:

The problem I see is that public proponents of an idea tend to exaggerate the support for it (they almost have to, to defend themselves against the inevitable criticisms). And public critics tend to get dismissive and overly protective of the mainstream view. I'm not always confident that the people who become the public face of a debate are inclined to represent it reasonably, on either side.

What I like to find, if I can, are criticisms from proponents of the idea and support from hardened skeptics of the idea - people who are speaking against their biases. Alternatively, I try to identify personal sources of information who have expertise on the subject and who I know I can trust to give a reasoned representation. Sometimes I have a "test case" if I am trying to decide if an authority can be trusted. I see if they have offered an opinion on a subject I know very, very well, and see if they get it right.

I'd be very interested in other people's ideas on how to approach this.

Linda

Chris

I think psi is very different from most controversial areas of science. There's no intrinsic requirement for advanced specialist knowledge to judge whether an experiment provides evidence for the existence of psi. A basic knowledge of statistics is usually all that's needed - so anyone can play.
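Chris's point - that a basic grasp of statistics is often enough to evaluate a psi experiment - can be made concrete with a one-sided binomial test. The numbers below (a four-choice design with a 25% chance hit rate, 100 trials, 35 hits) are hypothetical, and the helper is a from-scratch sketch rather than any particular library's API:

```python
from math import comb

def binomial_tail_p(hits, trials, chance):
    """One-sided probability of getting at least `hits` successes in
    `trials` independent tries, each with per-trial probability `chance`."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(hits, trials + 1)
    )

# Hypothetical ganzfeld-style experiment: 4 choices per trial (25% chance
# hit rate), 100 trials, 35 hits observed. How surprising is that?
p = binomial_tail_p(35, 100, 0.25)
```

A p-value in the low percents says "this hit rate would be unusual by chance alone" - which is exactly the kind of judgement a layperson with basic statistics can make about a reported experiment.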
Jim_Smith

I've had many experiences that have given me confidence in my own analytical ability and my ability to form opinions myself without having to rely on authorities.

When I was in graduate school I had several situations - class projects, thesis research, etc. - where I had to do enough research to become one of the world's foremost experts on a narrow topic. As an undergrad, I devised and carried out a simple experiment to answer a question that the head of the lab where I worked hadn't been able to answer. She walked past my bench as I was tabulating the results, immediately saw what I was doing, and it blew her mind. When I worked in the private sector I had similar experiences doing things as an engineer that people told me could not be done, and improving on or correcting vendors' mistakes about their own products. Once I was working alone in competition with a group of 10 engineers. I won. They were not surprised, because I had told their project leader exactly what I would do. Then I did it a second time on another project.

Much of science is like groping in the dark - experiments designed to be definitive give ambiguous results.

When you are used to the "experts" being wrong or confused, authority loses its cachet.

For this reason I don't really think authority is much help to laypeople trying to make a decision. Where authority is useful is in justifying an opinion. If experts disagree, and some experts believe what you do, that can give you enough confidence to stick by your own determinations in the face of "activists" and political attacks trying to get you to change your mind or shut up.

Yes there are a lot of quacks pushing foolishness. But some quacks are simply more subtle than others.
(2017-11-05, 05:58 PM)Jim_Smith Wrote: [ -> ]I've had many experiences that have given me confidence in my own analytical ability and my ability to form opinions myself without having to rely on authorities.

When I was in graduate school I had several situations - class projects, thesis research, etc. - where I had to do enough research to become one of the world's foremost experts on a narrow topic. As an undergrad, I devised and carried out a simple experiment to answer a question that the head of the lab where I worked hadn't been able to answer. She walked past my bench as I was tabulating the results, immediately saw what I was doing, and it blew her mind. When I worked in the private sector I had similar experiences doing things as an engineer that people told me could not be done, and improving on or correcting vendors' mistakes about their own products. Once I was working alone in competition with a group of 10 engineers. I won. They were not surprised, because I had told their project leader exactly what I would do. Then I did it a second time on another project.

Much of science is like groping in the dark - experiments designed to be definitive give ambiguous results.

When you are used to the "experts" being wrong or confused, authority loses its cachet.

For this reason I don't really think authority is much help to laypeople trying to make a decision. Where authority is useful is in justifying an opinion. If experts disagree, and some experts believe what you do, that can give you enough confidence to stick by your own determinations in the face of "activists" and political attacks trying to get you to change your mind.

Yes there are a lot of quacks pushing foolishness. But some quacks are simply more subtle than others.

Sure, you can easily find 'authorities' who will say whatever you want to hear - you can find people who tell you the earth is flat, that HIV doesn't cause AIDS, that evolution needs a god, that positive thinking will heal your cancer, that climate change isn't our fault, etc. etc.

And sure, lots of people are happy to regard themselves as clever enough to forgo years and years of study and practice.

But this is about where to get valid information, not how to convince yourself that you "don't need no stinkin' experts". I have yet to see examples where someone who lacks adequate knowledge and experience does better than those who have it. Can you give me some examples of your determinations that you think are valid (in the area of medicine, if possible)? I'm curious as to whether there is any justification for your confidence, or if we're just looking at yet another example of this:

http://psycnet.apa.org/doiLanding?doi=10....77.6.1121

Linda

Chris

(2017-11-06, 12:14 AM)fls Wrote: [ -> ]I'm curious as to whether there is any justification for your confidence, or if we're just looking at yet another example of this:

http://psycnet.apa.org/doiLanding?doi=10....77.6.1121

That's interesting. The abstract says:
"Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd."

The full paper is available here:
http://psych.colorado.edu/~vanboven/teac...unning.pdf

Just looking at the figures, essentially people in each quartile (including the top one) on average placed themselves in the second quartile.
An extreme example of the Dunning-Kruger effect:
http://www.internationalskeptics.com/for...p?t=286808
Jefferyw has videos on YouTube.

Chris

(2017-11-06, 09:04 AM)Chris Wrote: [ -> ]That's interesting. The abstract says:
"Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd."

The full paper is available here:
http://psych.colorado.edu/~vanboven/teac...unning.pdf

Just looking at the figures, essentially people in each quartile (including the top one) on average placed themselves in the second quartile.

Actually, I wonder a bit about the methodology of that paper. It's plausible enough that people with low cognitive ability wouldn't be capable of evaluating that ability accurately, and would overestimate it. But why should those with high cognitive ability underestimate it? 

Could this just be the result of the tendency of different tests to give different results, rather than a bias predominantly associated with low ability? Wouldn't we expect to see the same thing if we compared the results of two different tests of ability? If the results differed, then some of those in the bottom quartile for Test A would be in higher quartiles for Test B, so the average percentile would be higher and it would look as though Test B was overestimating the ability of the bottom quartile for Test A. And conversely underestimating the ability of the top quartile. And if the results differed a lot, then plotting average percentile for Test B against quartile for Test A would give a flattish line, as seen in the paper. Combine that with a natural tendency for everyone (not just the least able) to overestimate their ability, and we could get something very similar to what's in the paper.

I can't see that the data they present proves very much without a consideration of that kind of effect. Quickly scanning the paper I couldn't see any discussion of it, but maybe it's there and I missed it.
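Chris's critique can be checked with a toy simulation. Assume (purely for illustration) that two tests are each equal parts true ability and independent noise - so neither test is biased, and nobody over- or underestimates anything. Grouping people by their quartile on Test A and averaging their percentile on Test B still produces the flattened line he describes, through regression to the mean alone:

```python
import random
from statistics import mean

random.seed(1)
N = 10_000

# Latent ability, plus independent noise for each of two imperfect measurements.
ability = [random.gauss(0, 1) for _ in range(N)]
test_a = [a + random.gauss(0, 1) for a in ability]  # e.g. the actual test score
test_b = [a + random.gauss(0, 1) for a in ability]  # e.g. a second noisy measure

def percentiles(xs):
    """Percentile rank (0-100) of each value within the sample."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    pct = [0.0] * len(xs)
    for rank, i in enumerate(order):
        pct[i] = 100.0 * rank / (len(xs) - 1)
    return pct

pct_a, pct_b = percentiles(test_a), percentiles(test_b)

# Group by Test A quartile, then average the Test B percentile in each group.
by_quartile = [[] for _ in range(4)]
for pa, pb in zip(pct_a, pct_b):
    by_quartile[min(int(pa // 25), 3)].append(pb)

means = [mean(q) for q in by_quartile]
```

The bottom quartile on Test A (own average percentile 12.5) scores well above that on Test B, and the top quartile (own average 87.5) scores well below it - a flattened line with no self-assessment bias in the model at all, which is the artifact the post is asking the paper to rule out.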
(2017-11-05, 09:28 AM)Chris Wrote: [ -> ]I think psi is very different from most controversial areas of science. There's no intrinsic requirement for advanced specialist knowledge to judge whether an experiment provides evidence for the existence of psi. A basic knowledge of statistics is usually all that's needed - so anyone can play.

Statistics seem to be very open to interpretation and even abuse.  They do, however, provide useful information for the thinker to reason with.  The problem is that it can be too easy to take them at face value.  I think that if statistics appear to suggest something, that is grounds for further research, but on their own they hardly constitute science.
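One concrete way statistics get abused is multiple comparisons: run enough tests and something "significant" turns up by chance. A minimal simulation of the point (assuming well-calibrated p-values, which are uniform on [0, 1] when there is no real effect):

```python
import random

random.seed(7)

STUDIES = 5_000   # simulated "studies", each with no real effect
TESTS = 20        # hypothesis tests run per study
ALPHA = 0.05      # nominal significance threshold

# Under the null hypothesis, a well-calibrated p-value is uniform on [0, 1],
# so each test "succeeds" with probability ALPHA by chance alone.
hit_any = sum(
    any(random.random() < ALPHA for _ in range(TESTS))
    for _ in range(STUDIES)
)
family_rate = hit_any / STUDIES  # fraction of studies with >= 1 "significant" test
# Analytically this is 1 - (1 - 0.05)**20, roughly 0.64.
```

So a report of "a significant result" means little without knowing how many tests were run - which is one reason statistics, taken at face value, can mislead.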

Chris

(2018-05-24, 08:23 PM)Brian Wrote: [ -> ]Statistics seem to be very open to interpretation and even abuse.  They do, however, provide useful information for the thinker to reason with.  The problem is that it can be too easy to take them at face value.  I think that if statistics appear to suggest something, that is grounds for further research, but on their own they hardly constitute science.


I think statistics are indispensable as a scientific method, but obviously statistical tests need to be interpreted sensibly rather than applied in a blind, mechanical way. I think in many/most parapsychology experiments, the evidence will inevitably be statistical in its nature. That doesn't mean it can't be conclusive in practice, if it's strong enough.
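The point that statistical evidence can become conclusive when it is strong enough is mostly a matter of sample size. A sketch using the standard normal approximation for a hit-rate excess (the hit rates and trial counts below are hypothetical, chosen only to show the scaling):

```python
from math import sqrt, erfc

def hit_rate_test(hits, trials, chance):
    """Normal-approximation z score and one-sided p-value for observing
    `hits` out of `trials` against a per-trial chance rate `chance`."""
    se = sqrt(chance * (1 - chance) / trials)       # standard error of the rate
    z = (hits / trials - chance) / se
    p = 0.5 * erfc(z / sqrt(2))                     # upper tail of standard normal
    return z, p

# The same 32% hit rate against 25% chance, at two sample sizes:
z1, p1 = hit_rate_test(32, 100, 0.25)     # 100 trials: suggestive at best
z2, p2 = hit_rate_test(960, 3000, 0.25)   # 3000 trials: overwhelming
```

The identical effect size that is marginal over 100 trials yields a z score near 9 over 3000 trials - statistical evidence that is, in practice, conclusive, provided the design rules out ordinary explanations.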
(2018-05-24, 09:31 PM)Chris Wrote: [ -> ]I think statistics are indispensable as a scientific method, but obviously statistical tests need to be interpreted sensibly rather than applied in a blind, mechanical way. I think in many/most parapsychology experiments, the evidence will inevitably be statistical in its nature. That doesn't mean it can't be conclusive in practice, if it's strong enough.

Do you know if there are any guidelines about how statistics are generally used in science, and to what degree consensus is achieved among scientists with regard to the interpretation of statistics?