Read: Economics - The User's Guide

  • Chang’s Economics: The User’s Guide is less a guide to economics as it is practiced than to an economics as it should be: a pluralistic study of the economy, not a single approach used to analyze everything.

    Chang introduces the reader to some of the non-mainstream approaches to economics and shows where they differ from the mainstream by discussing concrete current economic issues (with a slight emphasis on macroeconomics) and how we got there. As such, the Guide is rather a work of economic history and comparative economics. The presentation of relevant statistics on a wide range of countries to provide context is commendable.

    Underlying the obvious narrative, an introduction to economics, there seem to be two theses. First, economics is inseparable from politics. Politics provides the institutional context, the rules of the game; economic analysis has to be performed within this frame. Economics, however, provides the reasoning for many political decisions; it “is a political argument.” Neither field can do without the other. Second, as a result, all economics is normative. There is no value-free economic analysis. “[T]here are no objective truths in economics that can be established independently of political, and frequently moral, judgements.” Hence, “[e]conomics is not - and can never be - a science.”

    While I mostly agree with the first thesis, I do not agree with the second one. Economic analysis (in particular of microeconomic issues) can be purely descriptive, positive. Without some understanding of the causal links, the mechanics of the economy, and the decision-making behavior and processes, a meaningful normative analysis would not be possible. I would also argue that, e.g., experimental economists are not just going through the motions of the scientific process of knowledge generation; they are scientists: observing, hypothesis-generating, falsifying, and theory-building scientists.

Read: Foundations and Fundamental Concepts of Mathematics

  • While Eves’ Foundations and Fundamental Concepts of Mathematics is certainly a bit outdated by now – it is a 1997 reprint of a textbook originally published in 1990 – it was still fun and interesting to read.

    The book offers a nice historical overview of fundamental concepts of mathematics (hence the title) that includes not just the historical background but a solid introduction to each concept itself. Of course, solid means here that the introduction provides just as much depth as is needed to understand what it is all about. As such, the different chapters may whet one’s appetite for more on the respective topic. Just when it gets interesting, the text stops. It has to. Otherwise, Eves would not be able to cover as much as he does.

    Sometimes, though, even the little detail that is given can seem a bit too much. Getting to theorem 48 in just one chapter shows that Eves is certainly not just skipping over details when he feels the reader may benefit from a rigorous presentation of the material.

Read: Generalized Linear Models for Categorical and Continuous Limited Dependent Variables

  • On first impression, the small textbook by Smithson and Merkle is a nice companion for Agresti’s Categorical Data Analysis and Analysis of Ordinal Categorical Data. It briefly discusses the theoretical foundation of the applied modelling approaches, explains the models using concrete examples, and provides a brief introduction to the relevant R (and Stata) functions.

    On closer inspection, however, it becomes clear that the discussions are often too shallow. In particular, the applied models would have benefited from more detail. The reader is referred to other textbooks for the missing details that would be necessary to really learn and understand why a certain approach should be taken and how to interpret and check any estimations. The text cannot stand alone. Its contribution is thus a mere cursory overview of a few select functions in R (and Stata). Some additional functions for R are provided on an accompanying webpage. Which, of course, begs the question why the authors did not package these functions in an R library made available on CRAN, the standard electronic archive for R.
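    To give an idea of the kind of model the book covers: below is a minimal sketch of fitting a proportional-odds (ordinal logit) model in Python with statsmodels – my own illustration with made-up data and variable names, not an example from the book, which uses R and Stata.

      # Minimal ordinal-logit sketch; data are simulated for illustration only.
      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(42)
      n = 500
      x = rng.normal(size=n)                   # a single continuous predictor
      latent = 0.8 * x + rng.logistic(size=n)  # latent propensity, logistic noise
      # Discretize the latent variable into three ordered categories.
      y = pd.Series(pd.Categorical.from_codes(
          np.digitize(latent, [-1.0, 1.0]),
          categories=["low", "mid", "high"], ordered=True))

      model = OrderedModel(y, x[:, None], distr="logit")  # proportional odds
      result = model.fit(method="bfgs", disp=False)
      print(result.summary())                  # slope estimate plus two thresholds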

    What really made me question the text, however, were phrases like: “…its p value is 0.057, which conventionally would not be regarded as not quite significant…”, and “This model is not quite significantly superior to the preceding one (… p=0.068).” This is not quite good scientific practice. In a textbook, of all things.

Read: Understanding The New Statistics

  • Understanding The New Statistics is about understanding statistics and applying statistical methods that are not new at all. They are just under-used in the social and behavioral sciences.

    It is all about abandoning null hypothesis significance testing (NHST) and replacing it with the more informative effect sizes and confidence intervals. Targeted at students as a complementary text to their standard textbook, Cumming’s book’s most important and distinguishing feature is its attempt to create intuition for the variability of data and derived statistics. The many exercises that rely on simulating (small) data (sets) and observing the variability of summary statistics are a great tool for understanding the properties and interpretation of these statistics.

    Nevertheless, beyond facilitating said intuition, the text has little additional value. The theory, the necessary math, is often not presented. The exercises, and indeed much of the book, rely on a (free) piece of proprietary software that I cannot use since it depends on another commercial software that I don’t own and would never have used for statistics (Excel). Therefore, much of the text remained cryptic. I would have preferred an open source approach, maybe an R package.
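    For illustration, here is a minimal sketch of what an open source version of such a simulation exercise could look like – my own toy example, not the book’s software: draw many small samples and watch the 95% confidence interval bounce around.

      # Toy simulation of CI variability; all numbers are made up.
      import numpy as np

      rng = np.random.default_rng(1)
      mu, sigma, n = 50.0, 20.0, 15  # true mean, true sd, (small) sample size

      for i in range(20):
          sample = rng.normal(mu, sigma, n)
          m = sample.mean()
          se = sample.std(ddof=1) / np.sqrt(n)
          lo, hi = m - 1.96 * se, m + 1.96 * se  # normal-approximation 95% CI
          miss = "  <-- misses mu" if not (lo <= mu <= hi) else ""
          print(f"run {i:2d}: mean={m:5.1f}  CI=[{lo:5.1f}, {hi:5.1f}]{miss}")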

    Further, for a text that advocates replacing NHST with substantial statistics on effect sizes and uncertainty, there are too many asterisks signifying different levels of statistical significance. More surprising, however, was the absence of any glimpse at Bayesian methods, which would fit the bill perfectly, showing likely effect sizes and their corresponding uncertainty. In the context of meta-analysis I would have expected an updating of our beliefs, a Bayesian aggregation of the accumulating evidence. Instead, the text remains 100% frequentist.
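    To sketch what I mean – a toy example of my own, not anything from the book: with normally distributed effect estimates and known standard errors, Bayesian aggregation of accumulating evidence reduces to precision-weighted updating.

      # Sequential normal-normal updating of a belief about an effect size;
      # the study estimates and standard errors below are hypothetical.
      import numpy as np

      prior_mean, prior_var = 0.0, 1.0**2  # vague prior on the effect size
      studies = [(0.45, 0.20), (0.30, 0.15), (0.60, 0.25)]  # (estimate, se)

      mean, var = prior_mean, prior_var
      for est, se in studies:
          w_prior, w_data = 1.0 / var, 1.0 / se**2  # precisions as weights
          mean = (w_prior * mean + w_data * est) / (w_prior + w_data)
          var = 1.0 / (w_prior + w_data)
          print(f"posterior: {mean:.3f} +/- {np.sqrt(var):.3f}")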

    In the end, the text is maybe not for the student but for the teacher. And maybe the text should not be read for its content in a narrower sense but for its ideas on pedagogy, on how to teach introductory statistics.

Read: How Markets Work

  • After a few rather more radical, heterodox critiques of current (textbook) economics, Prasch’s “How Markets Work” is rather orthodox. It still deviates from the standard introductory textbook treatment of markets in the sense that it does not blindly follow the doctrine that the (competitive) market is best, which advocates the market institution as the easy solution to many problems – if only the market were unregulated. Yet the critique focuses on the unreflected application of the perfectly competitive commodity market model to goods and services that do not fit into the standard commodity category.

    Hence, Prasch discusses the peculiar deviations of specific markets that render the standard textbook toy model inapplicable. He discusses, e.g., financial asset markets that are characterized by positive instead of negative feedback loops and are therefore not necessarily self-stabilizing, and labor markets that feature non-monotonic supply curves that bend backwards, forwards, and backwards again and may thus have four different equilibria, two stable and two unstable ones, at different wage levels. He also touches upon the issue of prices, values, and incommensurability. There are contexts in which the orthodox utility framework seems not to apply, where the choice problem cannot easily be represented by a scalar utility model.
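    The feedback point is easy to make concrete. Here is a toy simulation of my own (not Prasch’s) contrasting a negative-feedback market, where deviations from the equilibrium price are dampened, with a positive-feedback market, where trend-chasing amplifies them.

      # Price dynamics p_{t+1} = p_t + feedback * (p_t - 1.0), with 1.0 the
      # (hypothetical) equilibrium price; parameters are made up.
      def simulate(feedback, p0=1.1, steps=10):
          prices = [p0]
          for _ in range(steps):
              prices.append(prices[-1] + feedback * (prices[-1] - 1.0))
          return prices

      print("negative feedback:", [round(p, 3) for p in simulate(-0.5)])  # converges
      print("positive feedback:", [round(p, 3) for p in simulate(+0.5)])  # diverges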

    Overall, Prasch’s “How Markets Work” is utterly unspectacular, non-revolutionary, orthodox, and just well thought-out. The didactic approach, starting with a discussion of property rights, is impeccable. As an added benefit, the book is easily accessible even for the uninitiated and mostly non-technical, as even the number of graphs is kept to a minimum. It may be a good supplementary reading for any introductory (micro-) economics course covering the analysis of demand and supply.

Read: The Economics Anti-Textbook

  • Teaching evaluations are just in, and they do not look bad. The changes I implemented during the last fall term had some positive impact. There is still room for improvement, though, and I already have a few ideas… I am a bit surprised, however, that there are some students demanding “more math (it is ultimately economics)” – these were principles courses. Given the huge heterogeneity in math skills this is not going to happen! And I also don’t think there is much to be gained by applying math to the over-simplified models of a first-year principles course in economics. For grad school they should rather take dedicated math courses.

    Indeed, I rather want to strengthen the discussion part of the course, the critical reflection of the theory.

    Enter The Economics Anti-Textbook: A Critical Thinker’s Guide to Microeconomics by Rod Hill and Tony Myatt. The Anti-Textbook is a great source of inspiration for such in-class discussions. It provides a nice antithesis to the standard neoclassical textbook treatment, it makes the underlying value judgments explicit, and it provides a counterpoint to the doctrine of fundamentalist free-market theory (see Jim Stanford in Labour/Le Travail for a review that mirrors my own sentiment about the book quite well).

    Much of their critique is not new. However, they provide a very accessible juxtaposition of orthodox and heterodox views, and a set of very thought-provoking questions that should get the discussion started after students have been presented with the standard view in class. Hence, this is where some of the ideas for next fall will come from.

    I like this anti-textbook much more than Steve Keen’s Debunking Economics (which I also mentioned to the more curious students this fall). The Anti-Textbook is accessible and perfectly suited as a companion for a microeconomics principles course (a macroeconomics equivalent is still being planned). Thus it will be included in my recommended readings list in the future.

Read: How to Read a Book

  • I believe you can get an idea of how to write well by reading. Not just by reading the “right” books that set an example, that provide you with a blueprint for your own writing, but also by reading well.

    Adler and van Doren’s How to Read a Book is a guide to reading well. Their main lessons are maybe that there is a certain set of questions your reading of a book, any text really, should answer, and that every text deserves its own speed of reading. Some texts should be read carefully, slowly, repeatedly. Other texts should be read fast, cursorily, or not at all.

    The meat of the book covers analytical reading, which should lead to answers to four crucial questions:

    1. What is the book about as a whole?
    2. What is being said in detail, and how?
    3. Is the book true, in whole or part?
    4. What of it?

    and provides a set of 15 rules or recommendations that help in the process of discovering the answers and judging the text. This is considerably more detailed than my own two guiding questions so far:

    1. What is this about?
    2. So what?

    The book has a little bit too much meat; it tries to convince, and it justifies every little recommendation. This leads to some repetition. (There were moments when I was reminded of Monty Python’s Holy Hand Grenade.) Nevertheless, I did not dare to skip any part. This is one of the books that deserve to be read well. (See http://sachachua.com for a nice visual summary and http://www.farnamstreetblog.com for a longer discussion of the book’s content.)

    It deserves to be read well for some of the hidden gems that do not necessarily (only) relate to reading well. My attention was caught in particular by:

    Discovery stands to instruction as learning without a teacher stands to learning through the help of one. In both cases, the activity of learning goes on in the one who learns. It would be a mistake to suppose that discovery is active learning and instruction passive. There is no inactive learning, just as there is no inactive reading.

    This is so true, in fact, that a better way to make the distinction clear is to call instruction “aided discovery.”
    and

    Teachability is often confused with subservience. A person is wrongly thought to be teachable if he is passive and pliable. On the contrary, teachability is an extremely active virtue. No one is really teachable who does not freely exercise his power of independent judgment. He can be trained, perhaps, but not taught.

    Needless to say, How to Read a Book will make it onto my students’ reading list.

Read: Experimental Economics - Rethinking the Rules

  • In contrast to what some economists today still say and believe, economics is an experimental science. At the latest when the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel was awarded to Daniel Kahneman (a psychologist) and Vernon Smith (an economist) in 2002, they should have acknowledged it.

    Economic experiments have proven useful in informing theory and in testing (new) economic institutions before their implementation on a broader scale, e.g. the design of spectrum auctions that generated unprecedented revenues for the states running them. Unfortunately, even within the community of experimental economists, their use and purpose are not without controversy. Some, let’s call them experimental economists in a narrower sense, see the main use of experiments in economics in showing that the theory works (well) and in finding instances of when it works best. The other group, let’s call them behavioral economists, see the economic experiment as one method to investigate the underlying assumptions of economic theory in order to inform theory building and inspire the revision of economic theories, so that they may move more towards a positive than a normative model of the world.

    With Experimental Economics, a group of six British experimental economists now tries to critically assess the current state of a field that constitutes an invaluable tool for research in all areas of economics.

    In a series of chapters they address the method and methodology of experimental economics, the domain of economic theory (where and when does it apply?) and the limits of experimental tests in terms of what can be said about the theory and the external validity of the experimental observations, and also how experiments are used as rhetorical devices, “exhibits” that reliably show some particular behavior of their participants to illustrate a specific point. Two further chapters address the important issues of financial incentives in experiments (when are they needed, and how should they be implemented?) and of the different sources of noise in the data that require bespoke statistical treatment.

    The last point, noise in the data and heterogeneity between subjects, is in my opinion a very important one, as it is still an often neglected topic in most experimental studies today. Of course, a well designed experiment may allow the authors to show their main point without any fancy statistics. On the other hand, in order to move to a positive theory of economic behavior, the individual and not the aggregate behavior should be the focus of the analysis. This necessarily requires a more advanced statistical treatment of the data. Just as laboratory and field experiments (and happenstance data) are complements, so are theory, experiments, and statistics.
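    A toy illustration of my own (not from the book) of why this matters: the aggregate can suggest one homogeneously “noisy” population where the data actually contain two sharply distinct types of subjects.

      # Simulated choices of two hypothetical subject types; the aggregate rate
      # is ~0.5, yet hardly any individual subject behaves like a 50/50 coin.
      import numpy as np

      rng = np.random.default_rng(7)
      n_subjects, n_choices = 40, 20
      # Half the subjects almost always cooperate, half almost never.
      p_type = np.where(rng.random(n_subjects) < 0.5, 0.9, 0.1)
      choices = rng.binomial(1, p_type[:, None], size=(n_subjects, n_choices))

      print("aggregate cooperation rate:", round(choices.mean(), 2))
      rates = choices.mean(axis=1)
      print("subjects between 30% and 70%:", int(((rates > 0.3) & (rates < 0.7)).sum()))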

    In sum, even though I may not agree with some of the more specific points Bardsley, Cubitt, Loomes, Moffatt, Starmer, and Sugden make, their book is an excellent text that will make it onto the reading list for my courses in experimental economics.

Read: Guide to Information Graphics

  • Now, that was a waste of money. Don’t get me wrong. Dona Wong’s Guide to Information Graphics is a nicely designed little book with some valuable advice on how to present quantitative data. Why is it a waste of money? It does not go beyond very small data sets and a few closely related time series. The data we are talking about is so sparse that even the dreaded pie chart cannot distort the perception of the depicted quantities by much, and consequently it is discussed in this little book.

    Though, book may be an overstatement; booklet seems more appropriate. And despite only being about 150 pages ‘thick’, there are some repetitions in its content. This is often a good didactic move. For a reference book, not so much.

    Since Dona Wong is a student of Edward Tufte, it makes more sense to refer directly to his work. So instead of looking into the Guide to Information Graphics, have a look at Tufte’s books.

    Another “Old Master” is William S. Cleveland.

    If you need an overview of different types of plots and ways to present data instead, Information Graphics - A Comprehensive Illustrated Reference by Robert L. Harris is the reference you are looking for.

    Not as nicely designed as Dona Wong’s Guide, yet with considerably more content, is Naomi Robbins’ Creating More Effective Graphs.

    And finally, I rather enjoyed reading Howard Wainer’s Picturing the Uncertain World, though it is more a historical account of the development of good and effective graphical displays.

Read: Mostly Harmless Econometrics

  • Reading statistics or econometrics textbooks cover to cover is certainly not something any “normal” person would do. So, I am not normal. And so ain’t Mostly Harmless Econometrics by Angrist and Pischke.

    You cannot learn econometrics just by reading this book; you would need another textbook for the basic econometric theory. Yet, MHE offers something often not found in your standard textbook: an applied perspective. It addresses issues that may arise in empirical work in labor and micro-economics, focusing on the identification of causal effects and illustrating the methods and pitfalls using empirical field studies that rely on either natural experiments (happenstance data) or field experiments.

    Their brief chapter on nonstandard (i.e. nonstandard relative to the theoretical ideal; the real world looks different) standard errors is, for instance, astonishingly accessible and almost makes me revise my standpoint on modelling the error structure (using multilevel designs) vs adjusting standard errors.
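    A minimal sketch of the “adjusting standard errors” route – simulated data with a made-up cluster structure, not an example from MHE: the same OLS fit once with classical and once with cluster-robust standard errors in Python’s statsmodels.

      # Classical vs cluster-robust standard errors on clustered data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_clusters, per_cluster = 30, 20
      groups = np.repeat(np.arange(n_clusters), per_cluster)
      x = rng.normal(size=n_clusters * per_cluster)
      cluster_effect = rng.normal(size=n_clusters)[groups]  # within-cluster correlation
      y = 1.0 + 0.5 * x + cluster_effect + rng.normal(size=x.size)

      X = sm.add_constant(x)
      classical = sm.OLS(y, X).fit()
      clustered = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": groups})
      print("classical SE:", classical.bse.round(3))
      print("clustered SE:", clustered.bse.round(3))  # typically larger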

    I do not know whether science geeks are still attracted by Adams’ Hitchhiker’s Guide to the Galaxy. Angrist and Pischke surely are. Not only is the title of their textbook an obvious reference to Adams’ work, they also start every chapter with a little Adams quote. Something I did, too, when I was still in graduate school. This gives their book a slightly brighter, less earnest tone. All in all, it is certainly not as dry as many other econometrics textbooks.

    As an added value, Angrist and Pischke have set up a companion website to their book where they post corrections (there are already quite a number of errata) and comments to MHE.