Statistics Done Wrong: Practical Tips for Avoiding Fallacies

The theme of this week’s posts is apparently “free web books on contentious topics.” Yesterday, it was typography. Today it’s statistics. In Statistics Done Wrong, Alex Reinhart presents a short guide to common problems with the way statistics is done in medicine and in the hard and soft sciences. Readers familiar with Andrew Gelman’s and John Ioannidis’s work will recognize most of the material, but Reinhart has done a nice job of packaging it all together into a short, comprehensible guide suitable for a student with relatively limited background (say, in the middle of the first-year required statistics sequence).

For example, Reinhart offers a nice illustration of the problem of assuming that “no significant difference” in an underpowered study means there is no real difference.

In the 1970s, many parts of the United States began to allow drivers to turn right at a red light.

Several studies were conducted to consider the safety impact of the change. For example, a consultant for the Virginia Department of Highways and Transportation conducted a before-and-after study of twenty intersections that began to allow right turns on red. Before the change there were 308 accidents at the intersections; after, there were 337 in a similar length of time. However, this difference was not statistically significant, and so the consultant concluded there was no safety impact.
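That non-result is easy to reproduce. As a quick sanity check (the conditional binomial framing and the exact test are my choices for illustration, not necessarily the consultant’s method): if the change had no effect and the two periods had similar exposure, then conditional on the 645 total accidents, the “after” count should look like a draw from Binomial(645, 0.5).

```python
from scipy.stats import binomtest  # requires scipy >= 1.7

before, after = 308, 337

# Under the null of no change (and similar exposure in both periods),
# the "after" count is Binomial(before + after, 0.5).
result = binomtest(after, n=before + after, p=0.5, alternative="two-sided")
print(f"p-value: {result.pvalue:.3f}")  # roughly 0.27: nowhere near 0.05
```

A p-value around 0.27 is exactly the kind of result the book warns about: absence of evidence, not evidence of absence.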

Based on this data, more cities and states began to allow right turns at red lights. The problem, of course, is that these studies were underpowered. More pedestrians were being run over and more cars were involved in collisions, but nobody collected enough data to show this conclusively until several years later, when larger studies clearly showed significant increases in collisions and pedestrian accidents (sometimes increases of up to 100%). The misinterpretation of underpowered studies cost lives.
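The power point is just as easy to make concrete. Here is a minimal simulation, assuming Poisson accident counts and a hypothetical true 10% increase in the accident rate (both assumptions are mine, chosen to match the scale of the Virginia numbers):

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(42)
base_count = 308      # expected accidents in the "before" period
true_increase = 1.10  # hypothetical true 10% rise in the accident rate
alpha = 0.05
n_sims = 2000

rejections = 0
for _ in range(n_sims):
    before = rng.poisson(base_count)
    after = rng.poisson(base_count * true_increase)
    # Same conditional binomial test as above, applied to simulated data.
    rejections += binomtest(after, n=before + after, p=0.5).pvalue < alpha

# Estimated power comes out around 0.2: a study of this size would miss
# a genuine 10% increase roughly four times out of five.
print(f"Estimated power: {rejections / n_sims:.2f}")
```

Raise true_increase and the estimated power climbs quickly, which is the point: the effect had to get much larger, or the data much more plentiful, before anyone could see it.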

Overall, I enjoyed the guide and recommend it, especially Reinhart’s suggested course of action in the conclusion:

Your task can be expressed in four simple steps:

1. Read a statistics textbook or take a good statistics course. Practice.
2. Plan your data analyses carefully and deliberately, avoiding the misconceptions and errors you have learned.
3. When you find common errors in the scientific literature – such as a simple misinterpretation of p values – hit the perpetrator over the head with your statistics textbook. It’s therapeutic.
4. Press for change in scientific education and publishing. It’s our research. Let’s not screw it up.

A rousing academic call to arms if ever there was one!
