Welcome to what we expect will be a very interesting and productive discussion of Andreas T. Schmidt’s “Getting Real on Rationality—Behavioral Science, Nudging, and Public Policy.” The paper is published in the most recent issue of Ethics, and is available here. Luc Bovens has kindly agreed to contribute a critical précis, and it appears immediately below. Please join in the discussion!
Luc Bovens writes:
A persistent and recurring critique of the nudge agenda is that a government that nudges its citizens does not treat them as rational agents. Andreas Schmidt defends nudging by arguing that it builds on ecologically rational processes. I will make a case for the critique, present Schmidt’s defense of nudging, and conclude with two objections.
- The Critique
Here is the critique in a nutshell. Cognitive scientists have identified a range of biases or mechanisms of human decision-making that fall short of ideal rationality. Citizens fail to realize their goals due to these biases. Nudge policies restructure the environment in which citizens make their choices in a manner that is sensitive to these biases. Within the restructured environment, the biases themselves will actually be conducive to better choices, that is, choices leading to outcomes that the citizens themselves prefer. In doing so, government exploits mechanisms of irrational decision-making—that is, it does not treat its citizens as rational agents.
Let’s think of the two paradigmatic nudge policies in this light.
In Cafeteria, we change the order of the food items in a high school cafeteria so that the healthy options are placed first. Students presumably want to make healthy food choices, but weakness of the will leads them astray and has them reach for the cheesecake rather than the salad. But students are also subject to mindless choosing—they are more prone to grab whatever comes first in line rather than scan all the available options. So, if we put the healthy options first, students will be more prone to choose these through the mechanism of mindless choosing.
In Save More Tomorrow, the nudge policy moves up the point in time at which employees are asked to increase their contributions toward their retirement savings. Employees presumably want to save more, but myopia gets the best of them. But they are also subject to the endowment effect and time-inconsistent preferences. On the endowment effect, people care more for money in hand than for money not yet in hand. On time-inconsistent preferences, when asked on Monday, people prefer $110 on Friday to $100 on Thursday, but when Thursday comes around, they prefer $100 on Thursday (that is, now) to $110 on Friday (that is, tomorrow). So, if we ask employees well ahead of time whether they want to commit part of their prospective raise toward their retirement, they are more prone to invest in their future than when they are asked after they have gotten the raise. This is the case because they don’t have the money in hand yet, and because they are not asked to bear the costs of their investment right now.
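The Monday-versus-Thursday preference reversal described above can be illustrated with a toy quasi-hyperbolic ("beta-delta") discounting calculation. The parameter values below are illustrative assumptions of mine, not figures from Schmidt's paper or from Bovens's précis:

```python
# Toy quasi-hyperbolic (beta-delta) discounting model of the
# Monday-vs-Thursday preference reversal. Parameter values are
# illustrative assumptions, not taken from the paper under discussion.

BETA = 0.7    # extra discount applied to anything that is not "now"
DELTA = 0.99  # per-day exponential discount factor

def present_value(amount, days_from_now):
    """Discounted value of `amount` received `days_from_now` days ahead."""
    if days_from_now == 0:
        return amount               # money in hand is not discounted
    return BETA * (DELTA ** days_from_now) * amount

# Viewed on Monday, both payments lie in the future (3 vs 4 days away),
# so both get hit by BETA and the larger, later payment wins:
print(present_value(110, 4) > present_value(100, 3))  # True

# Viewed on Thursday, $100 is available *now* and escapes BETA entirely,
# so the preference reverses:
print(present_value(100, 0) > present_value(110, 1))  # True
```

The same mechanics explain why Save More Tomorrow works: a contribution committed well in advance sits on the far side of the steep "now versus later" discount, so it feels cheap at the moment of choice.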
In both paradigm cases the nudge policy helps citizens make choices that are better for them by their own lights. In the original choice environments, weakness of the will and myopia got the best of them and they made bad choices. In the nudged choice environments, mindless choosing, the endowment effect, and time-inconsistent preferences conspire to make them do the right thing—namely, make healthy choices and increase their retirement savings.
Here is an analogy. I am planning to take my child to the bike shop. My child is old and wise enough to realize that they should purchase a high-quality bike. But they are attracted by bright colors and are going through a dinosaur phase to boot. I am scouting out the shop today and, as I had feared, there is a grey bike that is first-rate, and the brightly colored ones are complete trash. With special permission from the shopkeeper, I put a fluffy Barney the Dinosaur toy on the grey bike. My child comes in, Barney catches their attention, and we walk out with the grey bike. Mission accomplished.
Just like in a nudge, I have made a change to the choice environment so that the bias (a predilection for bright colors) that would have lured my child into making a bad choice is cancelled out by another bias (a love for anything that is associated with dinosaurs). I manipulated my child’s biases to make them do the right thing. They are old and wise enough to know that bright colors and being displayed with Barney are not good reasons to buy a bike. When asked what’s so good about the bike they will offer a technical explanation that puts me to shame. But without Barney—trust me, it wouldn’t have happened.
I think that this may be an OK way to treat my youngest one. And I know my kids: Mutatis mutandis, I may be able to pull off something similar with my oldest one. I am not an expert on child-rearing, but it strikes me that there is an age at which it would become disrespectful to follow this strategy. As my child grows older, I should teach them to focus on the features that make the options choice-worthy. I should treat them as rational decision-makers and if they fail in this regard, then I should try to strengthen their decision-making capacities rather than lure them into making the right choice. If this is how we should treat our children as they become capable of rational decision-making, then, a fortiori, this is how government should treat its citizens.
- Ecological Rationality
Schmidt argues that this critique misunderstands what nudging is all about. Nudging, he says, does not exploit mechanisms of choice that are lacking in some way or other, but rather works with mechanisms of choice that are ecologically rational—that is, they are optimally fitted given the cognitive and computational capacities of the chooser within the environment in which the choice is made. These ecologically rational choice mechanisms offer the best chances to accomplish our goals.
What are these ecologically rational choice mechanisms? The inspiration lies in Gigerenzer’s Simple Heuristics research program over the last three decades, which in turn builds on seminal papers by Herbert Simon in the 1950s. The paradigm case of ecological rationality is Gigerenzer’s gaze heuristic. When catching a ball, we could, in theory, determine where to place ourselves by calculating the trajectory of the ball, taking into account velocity, wind speed, spin, and many other variables. But that is too complicated. Players on the ballfield unconsciously follow this simple heuristic: They fix their gaze on the ball and, as they run, adjust their speed so that the angle of their gaze stays constant, and this will most often lead to a clean catch. So, forget ballistics—a simple heuristic will do the job.
How is the gaze heuristic a paradigm case of ecological rationality? Here is how the analogy is meant to work. Doing ballistics maps onto the decision-theoretic way of doing things: Examine all the available options; determine a utility function over the outcomes; and choose the action that maximizes utility. The gaze heuristic maps onto choice rules that real people use in real-world circumstances. Real people have limited computational capacities. And real-world circumstances often do not permit scanning all the available options: They are offered one by one, and we need to act on them or let them pass as they come. So, what do we do then? Well, we may use the following heuristic: Let a few options go by, set a threshold for acceptability, and choose the next option that exceeds the threshold. That is, in the words of Simon, we satisfice, rather than choose the option that has maximal utility.
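The satisficing rule just described—let a few options go by, set a threshold, take the next option that clears it—can be written out in a few lines. The calibration phase and the choice of threshold below are my own gloss on Simon's idea, offered as a sketch rather than as the canonical formulation:

```python
def satisfice(options, calibration=3):
    """Satisficing over sequentially offered options.

    Let the first `calibration` options go by to set an aspiration
    threshold (the best value seen so far), then take the first later
    option that exceeds it. `options` yields numeric utilities one at
    a time; a passed option cannot be revisited. Falls back to the
    last option seen if nothing ever clears the bar.
    """
    observed = []
    last = None
    for i, value in enumerate(options):
        last = value
        if i < calibration:
            observed.append(value)   # observe only; commit to nothing yet
            continue
        threshold = max(observed)    # aspiration level from what we let pass
        if value > threshold:
            return value             # good enough: stop searching here
    return last                      # the stream ran out on us

# Options arrive one by one; max(4, 7, 5) = 7 sets the bar,
# and 9 is the first later option to beat it.
print(satisfice([4, 7, 5, 6, 9, 8]))  # 9
```

Note that the rule never scans the whole option set, which is exactly what makes it usable when, as in the text above, options must be taken or passed as they come.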
Similarly, doing ballistics maps onto maximizing expected utility in the face of risk or to playing a Nash equilibrium in strategic contexts. In contrast, simple heuristics tell us what to do when it is not fitting to calculate expectations or when we may reasonably expect out-of-equilibrium cooperative play. The upshot is that for real-life people in real-life circumstances, following these simple heuristics will lead to better outcomes than determining what to do on grounds of rational choice models, just as following the gaze heuristic on the ballfield leads to better outcomes than calculating the trajectory of the ball as your ballistics textbook would want you to do.
But what does this have to do with nudging? Let us turn to some history. The Thaler and Sunstein nudge agenda builds on the work of Tversky and Kahneman. Tversky and Kahneman noticed that people are ‘intuitive grammarians’: Native speakers are able to unconsciously implement complex grammatical rules in their daily speech. But they are not so much ‘intuitive statisticians’: They make systematic errors in their probabilistic judgments. The same holds for rules of rational choice. The Heuristics and Biases research program consists in studying and taxonomizing these systematic errors of reasoning and agency.
From its early days, there have been many critiques of the Heuristics and Biases research program following this pattern: The so-called errors of reasoning and agency that Tversky and Kahneman identified really are not errors, but perfectly reasonable patterns of reasoning and agency for real people in real-world circumstances. This is where Gigerenzer’s Simple Heuristics fit in.
Default choices offer a nice example of this dynamic. When people are offered a set of options and one is designated as the default, they tend to go with the default. In the Heuristics and Biases research agenda, there is nothing about an option being the default that makes it choice-worthy. Rational agents should scan the available options and pick the best option by attending to all and only the relevant features, that is, the features that make the options more or less choice-worthy. But real people deviate from this rule of rational choice. There are various stories of why people stick to defaults, but what they all have in common is that they make choosing the default into a defect: people are lazy and can’t be bothered; they have some irrational attachment to the status quo; they conceive of the default as something they have in hand, and the endowment effect keeps them from moving away from it.
Subsequently, default choices move into the Simple Heuristics research agenda. They may be perfectly reasonable responses by real people in real-world environments. How so?
I may have good reason to believe that the default setter actually knows more about the issue than I do. Considering that I don’t have the time and resources to increase my expertise, why not go with the default? For this reason, I might sign onto the default for a pension savings plan.
Or maybe, I am facing a collective action problem and the cooperative choice is set as the default. If there were no default, then it would not do much good to be the lone person making the cooperative choice. But given that the cooperative choice is the default, I can reasonably expect many people to choose this way. And this is what makes the default attractive—it gives me the opportunity to be part of a cooperative venture. For this reason, I might sign onto the default of renewable energy.
Now let us return to the nudge agenda. Note that default choices did not play a role in the two paradigm examples of nudge we discussed earlier. But granted, default setting is huge in the nudge agenda. Schmidt chooses to focus on default choices because it’s easy enough to interpret this bias as a simple heuristic. Once we do this, then default setting no longer exploits a bias, but rather, builds on an entirely reasonable decision process by real people within real-world environments. What could be offensive about that? The nudging government is exculpated from the charge of not treating its citizens as rational agents.
- Two Objections
Schmidt makes a clever and original move in a well-executed article on a topic that has been well grazed by philosophers. Nonetheless, I would like to raise two objections.
First, we shouldn’t forget that Thaler and Sunstein’s nudge agenda grew out of Tversky and Kahneman’s Heuristics and Biases research agenda. They conceived of nudge as designing the choice environment so that agents would come to do the right thing precisely because their reasoning and choice processes are error-prone. The critique that a government that nudges its citizens does not treat them as rational agents remains valid for nudging as the architects of the agenda conceived of their creation.
But, of course, one might say, that’s all just history. We could perfectly well conceive of it in a different manner and then the critique would no longer stand. Fair enough, but I wonder whether we would still call it nudging.
Let’s go back to my child buying a bike. I am going to make up a simple heuristic for buying a quality bike—maybe there is something to it, but I am not enough of a connoisseur to vouch for it: If the brand of the derailleur is prominently displayed, you probably have a quality bike in hand. Now suppose that I know that my child follows this simple heuristic. Suppose that I make sure that the environment in which they make their choice is such that they won’t overlook this simple clue—e.g. I shine a spotlight on the derailleur or whatever may do the job. In this case, would I be nudging my child toward making a good choice? You might say “a rose by any other name,” but really, I would be hard-pressed to call this a case of nudging. I wouldn’t quite see it as on a par with instances of nudging in Thaler and Sunstein’s work, nor would I call it ‘nudging’ in ordinary parlance.
Similarly, if default setting were a tool that appeals to our rational nature, then I am not sure that we’d be happy to call it nudging. If a government pension fund makes an honest case that the default was set by an expert who has carefully considered the options on the basis of the best available evidence, then I don’t think they’d be nudging us toward choosing the default.
Second, it may be possible to reinterpret some biases within the Simple Heuristics framework, but we won’t be able to do it for all of them. To begin with, it’s not as straightforward to do it with the biases that are at work in the paradigm cases of nudge, namely, mindless choosing, the endowment effect and time-inconsistent preferences.
And there are even harder cases. Oxfam implemented the following nudge to increase charitable donations. You are offered a triptych of, say, an $18, $50, or $100 donation in an ad. There are various mechanisms at work. There is social-norm setting: if you were thinking about giving $10, know that it is just not done. There is a Goldilocks effect that drives you toward the donation in the center. And there is the sheer unfamiliarity of the smallest number that biases you against choosing it—$18, who gives that? So, $50 is what you click on—whereas you probably would have made a skimpy $10 donation without the nudge. Government could take a lesson from the Oxfam textbook and do precisely the same to increase your contribution to your retirement savings. And one could well imagine that this would be a successful nudge.
Would Schmidt be able to avert the critique that government would not be treating its citizens as rational agents in a similar manner? Maybe a case could be made for social-norm setting, but could we make the Goldilocks effect and the repelling effect of unfamiliar numbers into a simple heuristic? I doubt it. And if this can’t be done, then the critique of government not treating its citizens as rational agents sticks for at least some nudges, and, I suspect, for the majority of the nudges in the current nudge agenda.
Acknowledgments: I am grateful for comments from Audra Jenson and Pavel Nitchovski.