Read: How Numbers Rule the World

  • Disappointing.

    How Numbers Rule The World is mostly not about the use and abuse of statistics in global politics as promised on the book’s cover. The lack of good, appropriate statistics and the over-reliance on merely seemingly objective measures are themes of Fioramonti’s book, yes. His focus, however, is on his aversion to market solutions to social challenges and the creeping commodification of all domains of life.

    I agree, a badly designed market, a sloppy implementation of a well-designed market, and special allowances that circumvent the intent of a well-meaning policy (like a market for emissions) are all issues that deserve a good rant. (Blind) management by numbers, fudging of data, badly designed incentives, and rent-seeking behavior are also topics that deserve a good rant. Put these in a book with a title that promises an in-depth analysis of the use and abuse of statistics in politics, of how statistics shape the world, and you have a serious case of false advertising, wilful deceit, and a disappointed reader.

    Pair this with constant references to quips by prominent and less well-known persons that are irrelevant to the issue at hand, references to anecdotal screw-ups in support of his rant, and the occasional tangential discussion of philosophical issues, and the reader is not just disappointed.

Read: Counterfactuals and Causal Inference

  • Counterfactuals and Causal Inference is a very practical book that discusses the different approaches to identifying causal effects (in non-experimental and experimental data) at a very abstract level. Depending on the reader, this may be a good or not so good thing. I had to expend substantial effort to work through the text, and I fear that even though I understand directed acyclic graphs, I have not developed any intuition for their application that would help me in my applied modelling. Too often, the text remains at too abstract a level.

    What the text is missing is an even more practical guide with more concrete applied problems and their solutions. Yet, the text is good. It’s not a handbook for a quick how-to. It’s not a textbook for undergraduates. It’s a critical survey of the state of the art of statistical approaches for the identification of causal effects. It’s a valuable reminder that the regression approach is no magic bullet.

    That being said, the text raises the important question of identification and alerted me to the fact that some effects we estimate and report may not be the effects we would like them to be. I guess I will have to be even more careful when I interpret regressions in the future.

    Addendum: I read the first edition, which had been sitting on my to-read shelf for some years already. I just discovered that a second, revised edition is available.

Read: Statistics Done Wrong

  • Reinhart’s Statistics Done Wrong is a refreshingly entertaining exposition of typical and embarrassingly widespread problems with the statistical analysis in (published) research.

    It is not a textbook. It is non-technical. There are no formulas and only very few numbers. Nevertheless, it teaches the art of statistics. It may even instill in the (un)initiated reader the wish to pick up a statistics textbook and finally learn the stuff. As such, it may be a good gift for a first-year PhD researcher. Knowing about statistical power and related concepts before any data is collected can dramatically improve any research design and thus the final research (article).
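
    What such an upfront power analysis buys you is easy to demonstrate. A minimal sketch in R, using base R’s power.t.test; the assumed effect size of half a standard deviation and the conventional error levels are my made-up illustration, not Reinhart’s:

        # How many subjects per group are needed to detect a difference of
        # 0.5 standard deviations at the usual 5% significance level with
        # 80% power? (The effect size is an assumption for illustration.)
        power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80)
        # ... n = 63.77: plan for at least 64 subjects per group.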

    There is nothing new in Statistics Done Wrong. All the problems and all the examples chosen to illustrate them are already well known or were at least discussed in the usual blogs on applied statistics and data analysis. It is obvious that Reinhart follows, e.g., Andrew Gelman’s blog. Of course he does. Everyone interested in the use and abuse, in the good and bad practice, of statistics follows (or should follow) Andrew’s blog. Nevertheless, Reinhart adds additional value. His writing is clear and accessible.

    I have only one quibble: Reinhart states in the preface that he is not advocating any of the recent trends in and attempts to improve the practice of statistics, be it the complete abandonment of p-values, the use of “new statistics” based on confidence intervals, or a switch to Bayesian methods. Actually, he is advocating rather strongly for the use of the “new statistics”. He advocates the use of effect size estimates and confidence intervals over vanilla p-values. This is absolutely fine. Yet, he should openly stand by this position and not deny it.
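
    To illustrate what reporting effect sizes and confidence intervals looks like in practice, here is a minimal sketch in R with simulated data (the numbers are made up; this is my illustration, not an example from the book):

        set.seed(1)
        treatment <- rnorm(50, mean = 0.4)  # simulated outcomes
        control   <- rnorm(50, mean = 0.0)
        tt <- t.test(treatment, control)
        tt$conf.int  # 95% confidence interval for the difference in means
        # A standardized effect size (Cohen's d) via the pooled SD:
        n1 <- length(treatment); n2 <- length(control)
        sp <- sqrt(((n1 - 1) * var(treatment) +
                    (n2 - 1) * var(control)) / (n1 + n2 - 2))
        (mean(treatment) - mean(control)) / sp
        # Report the estimated effect and its uncertainty, not just p.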

Read: Generalized Linear Models for Categorical and Continuous Limited Dependent Variables

  • On first impression, the small textbook by Smithson and Merkle is a nice companion for Agresti’s Categorical Data Analysis and Analysis of Ordinal Categorical Data. It briefly discusses the theoretical foundation of the applied modelling approaches, explains the models using concrete examples, and provides a brief introduction to the relevant R (and Stata) functions.

    On more careful inspection, however, it becomes clear that the discussions are often too shallow. In particular, the applied models would have benefited from more detail. The reader is referred to other textbooks for the missing details that would be necessary to really learn and understand why a certain approach should be taken and how to interpret and check any estimations. The text cannot stand alone. Its contribution is, thus, a mere cursory overview of a few select functions in R (and Stata). Some additional functions for R are provided on an accompanying webpage. Which, of course, begs the question why the authors did not package these functions in an R library made available on CRAN, the standard electronic archive for R.

    What really made me question the text, however, were phrases like: “…its p value is 0.057, which conventionally would not be regarded as not quite significant…”, and “This model is not quite significantly superior to the preceding one (… p=0.068).” This is not quite good scientific practice. In a textbook, of all things.

Read: Understanding The New Statistics

  • Understanding The New Statistics is about understanding statistics and applying statistical methods that are not new at all. They are just under-used in the social and behavioral sciences.

    It is all about abandoning Null Hypothesis Significance Tests and replacing them with the more informative Effect Sizes and Confidence Intervals. Targeted at students as a complementary text to their standard textbook, the most important and distinguishing feature of Cumming’s book is its attempt to create intuition for the variability of data and derived statistics. The many exercises that rely on simulating (small) data (sets) and observing the variability of summary statistics are a great tool for understanding the properties and interpretation of these statistics.

    Nevertheless, beyond facilitating said intuition the text has little additional value. The theory, the necessary math, is often not presented. The exercises, and indeed much of the book, rely on a (free) proprietary program that I cannot use since it depends on another commercial piece of software that I don’t own and would never have used for statistics (Excel). Therefore, much of the text remained cryptic. I would have preferred an open source approach, maybe an R package.
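
    The simulation exercises themselves need no proprietary tools, though. A minimal sketch in R of the kind of intuition-building Cumming is after (my reconstruction, not the book’s code): draw repeated small samples and watch the sample mean dance around the true value.

        set.seed(42)
        # 20 small samples (n = 10) from the same population:
        # the sample mean jumps around the true mean of 50.
        means <- replicate(20, mean(rnorm(10, mean = 50, sd = 10)))
        round(means, 1)
        range(means)  # the spread is sobering for n = 10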

    Further, for a text that advocates replacing NHST with substantive statistics on effect sizes and uncertainty, there are too many asterisks signifying different levels of statistical significance. More surprising, however, was the absence of even a glimpse at Bayesian methods, which would fit the bill perfectly, showing likely effect sizes and their corresponding uncertainty. In the context of meta-analysis I would have expected an updating of our beliefs, a Bayesian aggregation of the accumulating evidence. Instead, the text remains 100% frequentist.
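
    Such a Bayesian aggregation need not be complicated. A minimal sketch of my own (made-up numbers, not anything from the book): with normal likelihoods and a normal prior, pooling study estimates is just precision weighting.

        # Bayesian updating of an effect size under a normal prior.
        prior_mean <- 0
        prior_prec <- 1 / 0.5^2  # a vague prior with sd = 0.5
        # Each new study reports an estimate and a standard error:
        studies <- data.frame(est = c(0.30, 0.45, 0.25),
                              se  = c(0.15, 0.20, 0.10))
        post_prec <- prior_prec + sum(1 / studies$se^2)
        post_mean <- (prior_prec * prior_mean +
                      sum(studies$est / studies$se^2)) / post_prec
        c(mean = post_mean, sd = 1 / sqrt(post_prec))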

    In the end, the text is maybe not for the student but for the teacher. And maybe the text should not be read for its content in the narrower sense but for its ideas on pedagogy, on how to teach introductory statistics.

Read: How not to be wrong

  • With “How not to be wrong” being about mathematical thinking, I was a bit surprised by how much of it is about statistics. And even though it may lack the depth of critique of the (ab)use of statistics that can be found in the works of Ziliak and McCloskey or Gigerenzer, it is a very good popular treatment of the topic. Worth the read.

    A particular added value is – in my opinion – the reminder that most things in the real world are not linear. Linearity is just an approximation, valid only for (very) small ranges. I agree with Ellenberg, we – I – forget this too often.
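
    How quickly a linear approximation deteriorates is easy to check; a small example of my own (not Ellenberg’s), approximating log(1 + x) by x:

        x <- c(0.01, 0.1, 0.5, 1, 2)
        cbind(x, linear = x, exact = log(1 + x),
              rel_error = (x - log(1 + x)) / log(1 + x))
        # At x = 0.01 the relative error is about 0.5%;
        # at x = 2 it has already grown to about 82%.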

    The only thing that I did not like were the sports references (I can condone idiosyncratic tastes in music). The book includes lots of footnotes and endnotes with references. So many, and so many recent ones, that I indeed found a few new sources to add to my to-read list. That is rare.

Read: Numbers Rule Your World

  • Numbers certainly rule my world. Statistics is one of my most important research tools. So why not have a look at another pop-statistics book?

    Kaiser Fung’s Numbers Rule Your World manages to introduce some fundamental concepts of statistics without actually doing any statistics. Each concept is introduced in the context of two case studies that highlight different aspects of the concept, showing that different objectives lead to different approaches to the data, utilizing the same statistical concept in different ways. Further, instead of giving another introduction to standard basics like central tendency, random processes, and so on, Fung focuses on a few central, more general concepts. Thus, he intends to instill some statistical thinking rather than to instruct the reader in specific methods.

    Let me single out two statistical concepts from his list of five that I believe to be the most important.

    First, heterogeneity matters (Fung writes ‘variability matters’); the mean hides all the interesting stuff. Much of my research interest can be summarized with this exact statement. There is no real representative agent in economics. Differences in preferences and behavior, in the intensity of reactions to a stimulus (or treatment in an experiment), are the really interesting part of the story. Studying human behavior without this heterogeneity would be rather boring.
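
    A tiny simulation (my illustration, not one of Fung’s case studies) makes the point: two subgroups reacting in opposite directions average out to a misleading “no effect”.

        set.seed(7)
        # Half the subjects respond positively, half negatively.
        effect <- c(rnorm(100, mean =  1, sd = 0.3),
                    rnorm(100, mean = -1, sd = 0.3))
        mean(effect)  # close to 0: "no effect" on average
        # The group means tell the real story:
        tapply(effect, rep(c("up", "down"), each = 100), mean)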

    Second, there are two types of errors in statistical tests – false negatives and false positives – and we attach different costs to these errors. There is always a trade-off. You cannot decrease one error rate without increasing the other (keeping the data constant and just moving the decision threshold around). Indeed, asymmetric costs can also be attached to errors in a continuous model. And these asymmetric costs often exist in real life: a negative deviation from a target may have a considerably different impact on a decision than a positive deviation.
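
    The trade-off is easy to see in a small simulation (again my sketch, not Fung’s): with the data fixed, sliding the decision threshold only exchanges one error for the other.

        set.seed(3)
        healthy <- rnorm(1000, mean = 0)    # scores of true negatives
        sick    <- rnorm(1000, mean = 1.5)  # scores of true positives
        for (cut in c(0.25, 0.75, 1.25)) {
          fp <- mean(healthy > cut)  # false positive rate
          fn <- mean(sick <= cut)    # false negative rate
          cat(sprintf("threshold %.2f: FPR %.2f, FNR %.2f\n", cut, fp, fn))
        }
        # Raising the threshold buys fewer false positives
        # at the price of more false negatives.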

    Both ideas seem often neglected in applied work. It is not the differences between individuals that are studied; it is often the general tendency of the whole group that is reported. Between-subject heterogeneity is hidden. And often only one type of error is explicitly mentioned. Mr. P tells you only about the false positive. (Too often, I have to plead guilty to the second charge myself.)

    In the end, however, Fung’s book is not for the (applied) statistician or the seasoned researcher applying statistics in his work. It is for laypeople, and it may well be worth their time to have a look at it.

Read: The Cult of Statistical Significance

  • Empirical economists agree with the critique but still do otherwise – I think that is one of the big problems McCloskey sees in them. I also had the good luck to meet Gerd Gigerenzer, a psychologist and fellow warrior against the misuse of statistics, and to discuss this particular topic with him during a sociable evening after a long day full of presentations at a remote conference venue of the Max Planck Society. Yes, there is something wrong with our (that is, the economists’) way of relying on, reporting, and interpreting statistics and specifically statistical significance.

    How the Standard Error Costs Us Jobs, Justice, and Lives is not only the subtitle of Ziliak and McCloskey’s manifesto The Cult of Statistical Significance; it is also quite indicative of their (strong) rhetoric.

    The book can be roughly divided into two parts that are devoted to different “themes”. The first is an updated and extended rehash of their earlier articles on the current practice of relying on statistical significance in various fields. If you have not read their articles so far, read this and be shocked. You will see the authors’ outrage in every paragraph. The second part and theme is a historical account that tries to shed light on how we ended up where we are. This part is rather filled with bitterness and repugnance for R. A. Fisher and compassion for the neglected Mr. Student, William Sealy Gosset.

    Ziliak and McCloskey’s rhetoric is unique, yet it is not always to their benefit. Still, they certainly make their point, and at least in private you have to agree with them. All in all, the book is entertaining and instructive. Even so, I would not assign this book to a class for reading; I would rather recommend the 2004 special issue of the Journal of Socio-Economics on this topic. On the other hand, every empirical scientist and every policy maker relying on scientific research (shouldn’t they all?) should be aware that, first, size matters and, second, that precision of measurement should not be the only decision criterion.

Read: Guide to Information Graphics

  • Now, that was a waste of money. Don’t get me wrong. Dona Wong’s Guide to Information Graphics is a nicely designed little book with some valuable advice on how to present quantitative data. Why is it a waste of money? It does not go beyond very small data sets and few closely related time series. The data we talk about is so sparse that even the dreaded pie chart cannot distort the perception of the depicted quantities by much, and consequently it is discussed in this little book.

    Though “book” may be an overstatement; “booklet” seems more appropriate. And despite being only about 150 pages ‘thick’, there are some repetitions in its content. This is often a good didactic move. For a reference book, not so much.

    Since Dona Wong is a student of Edward Tufte, it makes sense to rather refer to his work. So instead of looking into Guide to Information Graphics, have a look at his books directly.

    Another “Old Master” worth a look is William S. Cleveland and his work.

    If you instead need an overview of different types of plots and ways to present data, Information Graphics - A Comprehensive Illustrated Reference by Robert L. Harris is the reference you are looking for.

    Not as nicely designed as Dona Wong’s Guide, yet with considerably more content, is Naomi Robbins’ Creating More Effective Graphs.

    And finally, I rather enjoyed reading Howard Wainer’s Picturing the Uncertain World, though it is more a historical account of the development of good and effective graphical displays.

Read: The Drunkard's Walk - How Randomness Rules our Lives

  • While Ayres’ Super Crunchers invited the reader to find patterns in seemingly random data (and to run controlled experiments to assess differences in treatments, e.g., maximizing sales revenue by the “right” choice of book title), Mlodinow’s The Drunkard’s Walk is more a warning against seeing patterns in seemingly non-random data.

    Life is full of randomness, and Mlodinow’s little book raises some awareness of the random factor in our lives. He gives a nice historical account of the concept of randomness in mathematics and other sciences, as the disciplinary borders were once not as distinct as they are now. This reminded me a bit of Peter Bernstein’s Against the Gods, though Mlodinow’s work is considerably shorter and more focused, owing to the more directed topic of his book. My recollection may be wrong, yet I believe his work is also more sanguine.

    In short, he did a very good job. The Drunkard’s Walk is entertaining, balanced, and instructive, and covers considerably more than just the economic side of randomness: the chance element in our lives, luck and misfortune, the misperception of probabilities and causality, and psychological biases. Finally, he also cautions all those who rely a little too much on their statistics…