
Read: Rise of the Terran Empire

The decline and fall of the Polesotechnic League: The style, approach, and content are different, but Anderson’s (short) novels reminded me of Asimov’s Foundation. It is grand.

I liked the “intergalactic entrepreneur as hero” theme; I would have liked to read more of it. Cut-throat, scheming businessmen, space exploration, (inter-species) camaraderie, philanthropic, culture-preserving, selfless acts, and maybe a tad too much space battle make for very good entertainment (and layman’s social science).

As this book marks the end of the Polesotechnic League trilogy collection, it is also a new beginning. Let’s see what the collection of Flandry novels will offer.

Read: The power of fifty bits

The Praise. What happens when a practitioner writes about behavioral sciences’ insights and their applications? You get a refreshingly different perspective, refreshingly new examples for behavior change strategies that work, and in this particular case a refreshingly balanced discussion of the underlying ethical principles.

In contrast to many other authors in the popular behavioral science genre, Bob Nease does not write about human irrationality. That alone is a reason to recommend his book: The term irrationality is often misunderstood as stupidity, and some authors seem to emphasize this interpretation, pushing the need for paternalistic advice and guidance. Nease, on the other hand, focuses on bounded rationality – limited cognitive ability, limited willpower (and limited self-interest) – as the result of a long evolutionary process. People are not (that) inherently stupid and do not need to be guided by a better-knowing and well-meaning paternalistic entity; our decision processes and the resulting (in)actions are just maladaptive as a consequence of a rapidly changed environment. As a result, there is an intention-action gap. A gap that can be closed with the right choice architecture, or rather action-taking architecture. Since the intentions are already there, it just needs to become easier to follow through with them.

Nease offers a small set of strategies that have this goal in mind: Making it easier to follow through (when the inaction is caused by inattention and inertia). The strategies are, of course, not new or unique to Nease’s insight. Default options, mandated choice, and framing are discussed at length in the academic and the popular behavioral science literature. Yet, the examples from his personal work experience and the reminders (for the action-taking-environment engineer) of the constant need to experiment make his book entertaining, instructive, and hence worthwhile to read.

Fifty bits is a rather short, concise, and focused book. There is no padding. Indeed, some parts could have been slightly more detailed. As a result, there is no excessive hurdle for picking up the book and reading it. I guess this feature of the book, making it easier to pick it up and read it in full, is intentional.

The critique. While Nease is very careful to present a balanced discussion, to consider the ethics of behavior-changing interventions, and to call for intellectual honesty, avoidance of deception, and generally being “nice,” there is an inconsistency in one of his arguments. At least, I cannot agree with his rationale for choosing between mandated choice and defaults with opt-out.

First, Nease recommends requiring an active choice (chapter 3), which is sometimes also called “mandated choice.” Then, however, in chapter 5 (“Let it Ride”) he also recommends setting defaults with opt-outs and tries to establish a rule for when to ask for an active decision vs. when to rely on the default.

It is all linked to the “Effort [cost] of Active Choice,” the “Effort [cost] of Opting out,” and the “Fraction who would Opt out.” Setting a default with opt-out leads to a larger behavioral effect at the population level. If the expected cost of opting out is lower (at the population level) than the cost of active choice, the choice architect should implement a default with opt-out.
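As a rough sketch – my own formalization with made-up variable names, not Nease’s notation – the rule compares the active-choice effort, which everyone bears, against the expected opt-out effort, which only the opting-out fraction bears:

```python
def prefer_default_with_opt_out(effort_active_choice: float,
                                effort_opt_out: float,
                                fraction_opt_out: float) -> bool:
    """Sketch of the rule: prefer a default with opt-out when the
    expected per-person opt-out cost (borne only by the fraction who
    would opt out) is lower than the active-choice cost (borne by all).
    """
    expected_opt_out_cost = fraction_opt_out * effort_opt_out
    return expected_opt_out_cost < effort_active_choice

# Illustrative numbers (minutes of effort, invented for the example):
# active choice costs everyone 2 minutes; opting out costs 5 minutes,
# but only 10% would opt out -> expected cost 0.5 < 2 -> use a default.
prefer_default_with_opt_out(2.0, 5.0, 0.1)  # -> True
```

The critique below then amounts to saying that `effort_opt_out`, measured only as physical effort, understates the true cost.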

This may sound reasonable. So, let me add one of Nease’s insights that he offers in chapter 9 “Simplify wisely” under the heading “Why is easy so good?”

The bottom line is that what logically looks like a small bump on the road to better behavior psychologically looks more like a wall. (p. 135)

Seemingly small obstacles can be associated with high psychological costs. Looking only at the physical cost of opting out – say, only at the time it would take – neglects these psychological costs (of overcoming inertia). Naively applied, as in Nease’s example, the rule would too often recommend setting a default with opt-out instead of mandated choice.

Yet, there is another reason why I do not like this advice and cannot agree with the presented rationale for it.

The rule is based on a cost-to-society argument and the, at least theoretical, possibility of compensating the losers (i.e., aiming at a Kaldor-Hicks improvement). However, since compensation never actually occurs, it remains unclear whether there is indeed an increase in society’s welfare. Further, this requires that the disutilities and utilities caused to the individual members of society can be compared across individuals and aggregated in a meaningful way. I would contest this assumption. (For more, and more eloquent, critiques of this kind of social welfare improvement see the works of Arrow, Baumol, Bergson, Little, …) Mandated choice, on the other hand, does not require any of these interpersonal utility comparisons.

Admittedly, if my critique rests on such a technicality – a point that is often dismissed and ignored in practice – the advice cannot be that bad. Let’s just say I would prefer active choice on principle. It seems more honest, more autonomy-preserving, maybe even autonomy-enhancing (if the action-taking-environment engineer does not add peer pressure).

Disclosure: The author, Bob Nease, sent me a copy of his book for free. Thank you, Bob! I enjoyed it.

Read: Black Order

Having now read the third book in a series by James Rollins, I can say he is officially part of my rotation.

Black Order is a nice mix of action adventure, thriller, and science fiction. It is certainly not (highbrow) literature, but it is good for relaxing for a couple of hours. Even though the characters (within a given book) still remain a bit flat, over the course of several novels in the series there is now some noticeable character development.

And even though I did not want to think (much) while reading such a book, there was an interesting take on intelligent design.

Two things annoyed me.

First, the publisher should spend some money on a foreign language editor before a book is printed. There are some foreign language words and phrases (as it happens, most of them in German) that are just plain wrong. At least once I could only get the meaning after trying to conceive how an automatic translation would translate that phrase. At any rate, the title of the book should be translated correctly: And no, “Black Order” is not “Schwarzer Auftrag.”

Second, the book could have ended one supernatural experience earlier.

Read: Phishing for Phools

Phishing for Phools leaves me with a rather ambivalent feeling. Some parts I liked and found interesting, in other parts Akerlof and Shiller seem to just state the obvious, and in the remaining parts they offer interpretations that I cannot agree with. The particular mix, which equates legal and illegal actions, welfare-enhancing activities and plain fraud, seriously subtracts from the (entertainment) value of their little book.

I like their discussion of finance and fraud. (They are not the first to offer such an account.) And I agree that “greed” (which is not really a bad thing in itself), the (bad) design of incentives, and the lack of proper regulation to ensure well-functioning markets in the presence of information asymmetries all contributed to the problems we have observed.

Their discussion of the misaligned incentives in the pharma industry is likewise uncontroversial. Others have, of course, beaten that horse before; Bad Science and Bad Pharma by Ben Goldacre come to mind. Recommending better regulation to reduce the problems caused by information asymmetries is hardly a contested issue here.

The voter’s rational ignorance and the influence of special interest groups are also ultimately linked to information asymmetries. In contrast to the authors, who seem inclined to regulate lobbying more strictly, I do not believe that more and stricter regulation will necessarily lead to a better outcome of the political decision-making process. It may reduce some waste – resources spent on lobbying may find better uses elsewhere – but it will not change anything about the voter’s ignorance.

Though there is nothing really new up to here – the authors admit that their book may not offer anything new except for their interpretation – these parts are both instructive and, yes, entertaining.

Finally, the moment Akerlof and Shiller talk about Phishing for Phools that is not linked to information asymmetries (a moment that actually comes first in their book), I cannot agree with them. The provision of goods in convenient ways and places is not a bad thing. Yes, ceteris paribus I would like to live healthily. But if I buy donuts, the trade-off between the immediate satisfaction of my needs and wants and the long-run effects of that satisfaction is decided. By me. I do not need any paternalistic restrictions of my choice set. Educate me, but do not tell me what to do or take my choices away.

I really do not like how the entrepreneur who provides a valuable service to his customers is placed on the same level as the con man and the financial fraudster, or even the price-discriminating used car salesperson.


Read: The Edge of Madness

Michael Dobbs’ novel The Edge of Madness is rather on the edge of disappointing.

For a cyber-thriller there is too little cyber, too little (or even no) ‘wow, this is what technology can do nowadays.’ For a political thriller there is too little politics, scheming, and plotting, even though four different heads of state are involved in the plot. The characters are mostly cardboard cut-outs. Only the reluctant ‘hero’ gets a little more depth, some glimpses of his darker past.

The plot feels rather contrived. The solution to the big problem is too convenient. In the end, the evil guys are all dead or get what they deserve. At the end of the episode all is back to normal.

Utterly unremarkable.

Read: How do you know?

Seemingly irrational behavior, or rather bounded rationality, is the result of bounded cognitive abilities, bounded willpower, bounded self-interest, and – yes – bounded knowledge. Russell Hardin offers an account of the consequences of – fully rational – limited knowledge, an economics of ordinary knowledge. The question is what extent of knowledge, in terms of quantity and quality, we can expect from an ordinary person.

Rational ignorance permeates all domains of our daily lives, not just public policy and politics. To illustrate his point, maybe even delineating an extreme, Hardin singles out religious belief. Beliefs are just one instance of knowledge by authority, which lies at the core of an economics of ordinary knowledge. No one can gain expert knowledge in everything and hence has to take many bits and pieces of knowledge at face value from an authoritative source. What counts as an authoritative source, and who counts as an authority, from the perspective of an ordinary person may then limit the quality of knowledge, the extent of its objective truth. Hardin discusses the tension between science and religion, the individual and communal incentives to believe, sincerity, fundamentalism, and extremism. He draws a very bleak picture of society.

Even though Hardin acknowledges the existence of limits on cognitive abilities, willpower, and self-interest, his analysis only drops the assumption of perfect knowledge. Even so, he is able to explain many seemingly irrational patterns in our behavior. His ordinary person still tries to maximize their utility and decides that obtaining more and better knowledge may not be worth its cost. People remain rationally ignorant. Yet already this small deviation from the standard economic analysis of decisions, choices under uncertainty, and strategic interactions seems sufficient to explain seemingly irrational, i.e., objectively sub-optimal, behavior.

Adding the further bounds to our abilities is not likely to improve the quality of our decisions and welfare. So yes, Hardin draws a very bleak picture indeed.
