Read: Statistics Done Wrong


Reinhart’s Statistics Done Wrong is a refreshingly entertaining exposition of typical and embarrassingly widespread problems with statistical analysis in (published) research.

It is not a textbook. It is non-technical: there are no formulas and only very few numbers. Nevertheless, it teaches the art of statistics. It may even instill in the (un)initiated reader the wish to pick up a statistics textbook and finally learn the stuff. As such it may be a good gift for a first-year PhD researcher. Knowing about statistical power and related concepts before any data is collected can dramatically improve any research design and thus the final research (article).
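As a minimal sketch of what that means in practice (not from the book; the effect size and targets below are my own illustrative assumptions), here is a prospective power analysis in Python with statsmodels, done before any data is collected:

```python
# Sketch of a prospective power analysis: how many subjects per group do we
# need to have a reasonable chance of detecting the effect we hypothesize?
# The numbers (Cohen's d = 0.5, alpha = 0.05, power = 0.8) are illustrative
# conventions, not recommendations from the book.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,        # hypothesized standardized mean difference (Cohen's d)
    alpha=0.05,             # significance level
    power=0.8,              # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64
```

Running such a calculation first, rather than collecting whatever data is convenient, is exactly the habit the book tries to build.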

There is nothing new in Statistics Done Wrong. All the problems, and all the examples chosen to illustrate them, are already well known or were at least discussed in the usual blogs on applied statistics and data analysis. It is obvious that Reinhart follows, e.g., Andrew Gelman’s blog. Of course he does: everyone interested in the use and abuse of statistics, in its good and bad practice, follows (or should follow) Andrew’s blog. Nevertheless, Reinhart adds real value: his writing is clear and accessible.

I have only one quibble. Reinhart states in the preface that he is not advocating any of the recent trends in, and attempts to improve, the practice of statistics, be it the complete abandonment of p-values, the use of the “new statistics” based on confidence intervals, or a switch to Bayesian methods. Actually, he advocates rather strongly for the “new statistics”: he recommends effect size estimates and confidence intervals over vanilla p-values. This is absolutely fine. Yet he should stand by this position openly and not deny it.
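To make the contrast concrete, here is a small sketch (again not from the book, and using simulated data purely for illustration) of reporting an effect size estimate with a confidence interval alongside a bare p-value:

```python
# Sketch of "new statistics"-style reporting: an effect size estimate and a
# confidence interval say how large an effect plausibly is; a bare p-value
# does not. Data are simulated here solely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10.0, 2.0, size=50)
treated = rng.normal(11.0, 2.0, size=50)

# Vanilla p-value from a two-sample t-test.
t_stat, p_value = stats.ttest_ind(treated, control)

# Effect size (Cohen's d) and a 95% CI for the mean difference
# (normal approximation, hence the 1.96).
diff = treated.mean() - control.mean()
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = diff / pooled_sd
se = np.sqrt(treated.var(ddof=1) / len(treated)
             + control.var(ddof=1) / len(control))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"p = {p_value:.3f}; mean difference = {diff:.2f} "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f}); Cohen's d = {cohens_d:.2f}")
```

The p-value alone would only say “significant or not”; the interval and effect size tell the reader what was actually found.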
