Post by auntym on Mar 24, 2015 12:16:18 GMT -6
www.space.com/28912-mistakes-people-make-when-arguing-science.html?cmpid=514648
The 10 Mistakes People Make When Arguing Science
by Will J Grant, Australian National University and Rod Lamberts, Australian National University
March 24, 2015
This article was originally published on The Conversation. The publication contributed this article to Space.com's Expert Voices: Op-Ed & Insights.
UNDERSTANDING RESEARCH: What do we actually mean by research and how does it help inform our understanding of things? Understanding what’s being said in any new research can be challenging and there are some common mistakes that people make.
Have you ever tried to interpret some new research to work out what the study means in the grand scheme of things?
Well, maybe you’re smart and didn’t make any mistakes – but more likely you’re like most humans and accidentally made one of these 10 stuff-ups.
1. Wait! That’s just one study!
You wouldn’t judge all old men based on just Rolf Harris or Nelson Mandela. And so neither should you judge any topic based on just one study.
If you do it deliberately, it’s cherry-picking. If you do it by accident, it’s an example of the exception fallacy.
The well-worn and thoroughly discredited case of the measles, mumps and rubella (MMR) vaccine causing autism serves as a great example of both of these.
People who blindly accepted Andrew Wakefield’s (now retracted) study - when all the other evidence was to the contrary - fell afoul of the exception fallacy. People who selectively used it to oppose vaccination were cherry-picking.
2. Significant doesn’t mean important
Some effects might well be statistically significant, but so tiny as to be useless in practice.
[Image: You know what they say about statistics? Credit: Flickr/Frits Ahlefeldt-Laurvig, CC BY-ND]
Associations (like correlations) are great for falling foul of this, especially when studies have huge numbers of participants. Basically, if you have large numbers of participants in a study, significant associations tend to be plentiful, but not necessarily meaningful.
One example can be seen in a study of 22,000 people that found a significant (p<0.00001) association between people taking aspirin and a reduction in heart attacks, but the size of the result was minuscule.
The difference in the likelihood of heart attacks between those taking aspirin every day and those who weren’t was less than 1%. At this effect size – and considering the possible costs associated with taking aspirin – it is dubious whether it is worth taking at all.
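To see why a huge sample can make a tiny difference come out as "significant", here is a rough sketch using a two-proportion z-test. The counts below are illustrative only, not the actual aspirin trial's data – they just mimic a ~1 percentage point difference across roughly 22,000 people:

```python
import math

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p_pool = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal distribution
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Hypothetical numbers: 2.0% vs 1.0% heart-attack rates in two
# groups of 11,000 - an absolute difference of just 1 point.
z, p = two_proportion_z(220, 11000, 110, 11000)
print(f"z = {z:.1f}, p = {p:.1e}")  # p is far below 0.00001
```

With samples this large even a one-point gap produces a vanishingly small p-value – which says nothing about whether a one-point gap matters in practice.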
3. And effect size doesn’t mean useful
We might have a treatment that lowers our risk of a condition by 50%. But if the risk of having that condition was already vanishingly low (say a lifetime risk of 0.002%), then reducing that might be a little pointless.
We can flip this around and use what is called Number Needed to Treat (NNT).
Under normal conditions, if two random people out of 100,000 would get that condition during their lifetime, you’d need all 100,000 to take the treatment to reduce that number to one.
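The arithmetic behind NNT is just the reciprocal of the absolute risk reduction. A minimal sketch, plugging in the article's illustrative figures (a 0.002% lifetime risk halved by treatment):

```python
def number_needed_to_treat(baseline_risk, relative_risk_reduction):
    """NNT = 1 / absolute risk reduction."""
    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    return 1 / absolute_risk_reduction

# Lifetime risk of 0.002% (2 in 100,000), treatment cuts it by 50%:
nnt = number_needed_to_treat(0.00002, 0.5)
print(round(nnt))  # 100,000 people treated to prevent one case
```

A halved risk sounds impressive; an NNT of 100,000 makes the practical payoff much easier to judge.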
4. Are you judging the extremes by the majority?
Biology and medical research are great for reminding us that not all trends are linear.
We all know that people with very high salt intakes have a greater risk of cardio-vascular disease than people with a moderate salt intake.
But hey – people with a very low salt intake may also have a high risk of cardio-vascular disease.
The graph is U shaped, not just a line going straight up. The people at each end of the graph are probably doing different things.
5. Did you maybe even want to find that effect?
Even without trying, we notice and give more credence to information that agrees with views we already hold. We are attuned to seeing and accepting things that confirm what we already know, think and believe.
There are numerous examples of this confirmation bias, but studies such as this reveal how disturbing the effect can be.
In this case, the more educated people believed a person to be, the lighter they (incorrectly) remembered that person’s skin was.
CONTINUE READING: www.space.com/28912-mistakes-people-make-when-arguing-science.html?cmpid=514648