Welcome to what we expect will be a very interesting and productive discussion of Andreas T. Schmidt’s “Getting Real on Rationality—Behavioral Science, Nudging, and Public Policy.” The paper is published in the most recent issue of Ethics, and is available here. Luc Bovens has kindly agreed to contribute a critical précis, and it appears immediately below. Please join in the discussion!

Luc Bovens writes:

A persistent and recurring critique of the nudge agenda is that a government that nudges its citizens does not treat them as rational agents. Andreas Schmidt defends nudging by arguing that it builds on ecologically rational processes. I will make a case for the critique, present Schmidt’s defense of nudging, and conclude with two objections.

  1. The Critique

Here is the critique in a nutshell. Cognitive scientists have identified a range of biases or mechanisms of human decision-making that fall short of ideal rationality. Citizens fail to realize their goals due to these biases. Nudge policies restructure the environment in which citizens make their choices in a manner that is sensitive to these biases. Within the restructured environment, the biases themselves will actually be conducive to better choices, that is, choices leading to outcomes that the citizens themselves prefer. In doing so, government exploits mechanisms of irrational decision-making—that is, it does not treat its citizens as rational agents.

Let’s think of the two paradigmatic nudge policies in this light.

In Cafeteria, we change the order of the food items in a high school cafeteria so that the healthy options are placed first. Students presumably want to make healthy food choices, but weakness of the will leads them astray and has them reach for the cheesecake rather than the salad. But students are also subject to mindless choosing—they are more prone to grab whatever comes first in line rather than scan all the available options. So, if we put the healthy options first, students will be more prone to choose these through the mechanism of mindless choosing.

In Save More Tomorrow, the nudge policy moves up the point in time at which employees are asked to increase their contribution towards their retirement savings. Employees presumably want to save more, but myopia gets the best of them. But they are also subject to the endowment effect and time-inconsistent preferences. On the endowment effect, people care more for money in hand than for money not yet in hand. On time-inconsistent preferences, when asked on Monday, people prefer $110 on Friday to $100 on Thursday, but when Thursday comes around, they prefer $100 on Thursday (that is, now) to $110 on Friday (that is, tomorrow). So, if we ask employees well ahead of time whether they want to commit part of their prospective raise toward their retirement, they are more prone to invest in their future than when they are asked after they have gotten the raise. This is the case because they don’t have the money in hand yet, and because they are not asked to bear the costs of their investment right now.

In both paradigm cases the nudge policy helps citizens to make choices that are better for them by their own lights. In the original choice environments, weakness of the will and myopia got the best of them and they made bad choices. In the nudged choice environments, mindless choosing, the endowment effect, and time-inconsistent preferences conspired to make them do the right thing—namely, make healthy choices and increase their retirement savings.

Here is an analogy. I am planning to take my child to the bike shop. My child is old and wise enough to realize that they should purchase a high-quality bike. But they are attracted by bright colors and are going through a dinosaur phase to boot. I am scouting out the shop today and, as I had feared, there is a grey bike that is first-rate, and the brightly colored ones are complete trash. With special permission from the shopkeeper, I put a Barney the Dinosaur fluffy animal on the grey bike. My child comes in, Barney catches their attention, and we walk out with the grey bike. Mission accomplished.

Just like in a nudge, I have made a change to the choice environment so that the bias (a predilection for bright colors) that would have lured my child into making a bad choice is cancelled out by another bias (a love for anything that is associated with dinosaurs). I manipulated my child’s biases to make them do the right thing. They are old and wise enough to know that bright colors and being displayed with Barney are not good reasons to buy a bike. When asked what’s so good about the bike they will offer a technical explanation that puts me to shame. But without Barney—trust me, it wouldn’t have happened.

I think that this may be an OK way to treat my youngest one. And I know my kids: Mutatis mutandis, I may be able to pull off something similar with my oldest one. I am not an expert on child-rearing, but it strikes me that there is an age at which it would become disrespectful to follow this strategy. As my child grows older, I should teach them to focus on the features that make the options choice-worthy. I should treat them as rational decision-makers and if they fail in this regard, then I should try to strengthen their decision-making capacities rather than lure them into making the right choice. If this is how we should treat our children as they become capable of rational decision-making, then, a fortiori, this is how government should treat its citizens.

  2. Ecological Rationality

Schmidt argues that this critique misunderstands what nudging is all about. Nudging, he says, does not exploit mechanisms of choice that are lacking in some way or other, but rather works with mechanisms of choice that are ecologically rational—that is, they are optimally fitted given the cognitive and computational capacities of the chooser within the environment in which the choice is made. These ecologically rational choice mechanisms offer the best chances to accomplish our goals.

What are these ecologically rational choice mechanisms? The inspiration lies in Gigerenzer’s Simple Heuristics research program over the last three decades, which in turn builds on seminal papers by Herbert Simon in the 1950s. The paradigm case of ecological rationality is Gigerenzer’s gaze heuristic. When catching a ball, we could, in theory, determine where to place ourselves by calculating the trajectory of the ball, taking into account velocity, wind speed, spin, and many other variables. But that is too complicated. Players on the ballfield unconsciously follow this simple heuristic: As they are running to catch the ball, they keep the angle of their gaze fixed on the ball and this will most often lead to a clean catch. So, forget ballistics—a simple heuristic will do the job.

How is the gaze heuristic a paradigm case of ecological rationality? Here is how the analogy is meant to work. Doing ballistics maps onto the decision-theoretic way of doing things: Examine all the available options; determine a utility function over the outcomes; and choose the action that maximizes utility. The gaze heuristic maps onto choice rules that real people use in real-world circumstances. Real people have limited computational capacities. And real-world circumstances often do not permit scanning all the available options: They are offered one-by-one and we need to act on those choices or let them pass as they come. So, what do we do then? Well, we may use the following heuristic: Let a few options go by, set a threshold for acceptability, and choose the next option that exceeds the threshold. That is, in the words of Simon, we satisfice, rather than choose the option that has maximal utility.
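
To make Simon’s rule concrete, here is a minimal sketch of my own (not from Schmidt’s paper; the utilities and the observe-then-leap threshold are purely illustrative) of a satisficer who sees options one at a time, next to a maximizer who gets to scan them all:

```python
import random

def maximize(utilities):
    """Decision-theoretic ideal: survey every option and take the best one."""
    return max(utilities)

def satisfice(utilities, n_observe=3):
    """Simple heuristic: let a few options go by to set an aspiration level,
    then take the first later option that clears that threshold."""
    threshold = max(utilities[:n_observe])
    for u in utilities[n_observe:]:
        if u > threshold:          # good enough: stop searching here
            return u
    return utilities[-1]           # nothing cleared the bar: take the last option

random.seed(1)
options = [round(random.uniform(0, 10), 1) for _ in range(12)]  # offered one by one
print("maximizer takes:", maximize(options))
print("satisficer takes:", satisfice(options))
```

The satisficer will sometimes walk away with a slightly worse option than the maximizer, but it inspects only as many options as it takes to find an acceptable one, which is the sense in which the rule is fitted to an agent who cannot hold all the options in view at once.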

Similarly, doing ballistics maps onto maximizing expected utility in the face of risk or to playing a Nash equilibrium in strategic contexts. In contrast, simple heuristics tell us what to do when it is not fitting to calculate expectations or when we may reasonably expect out-of-equilibrium cooperative play. The upshot is that for real-life people in real-life circumstances, following these simple heuristics will lead to better outcomes than determining what to do on grounds of rational choice models, just as following the gaze heuristic on the ballfield leads to better outcomes than calculating the trajectory of the ball as your ballistics textbook would want you to do.

But what does this have to do with nudging? Let us turn to some history. The Thaler and Sunstein nudge agenda builds on the work of Tversky and Kahneman. Tversky and Kahneman noticed that people are ‘intuitive grammarians’: Native speakers are able to unconsciously implement complex grammatical rules in their daily speech. But they are not so much ‘intuitive statisticians’: They make systematic errors in their probabilistic judgments. The same holds for rules of rational choice. The Heuristics and Biases research program consists in studying and taxonomizing these systematic errors of reasoning and agency.

From its early days, there have been many critiques of the Heuristics and Biases research program following this pattern: The so-called errors of reasoning and agency that Tversky and Kahneman identified really are not errors, but perfectly reasonable patterns of reasoning and agency for real people in real-world circumstances. This is where Gigerenzer’s Simple Heuristics fit in.

Default choices offer a nice example of this dynamic. When people are offered a set of options and one is designated as the default, they tend to go with the default. In the Heuristics and Biases research agenda, there is nothing about an option being the default that makes it choice-worthy. Rational agents should scan the available options and pick the best option by attending to all and only the relevant features, that is, the features that make the options more or less choice-worthy. But real people deviate from this rule of rational choice. There are various stories of why people stick to defaults, but what they all have in common is that they make choosing the default into a defect: People are lazy and can’t be bothered; They have some irrational attachment to the status quo; They conceive of the default as something they have in hand, and the endowment effect keeps them from moving away from it.

Subsequently, default choices move into the Simple Heuristics research agenda. They may be perfectly reasonable responses by real people in real-world environments. How so?

I may have good reason to believe that the default setter actually knows more about the issue than I do. Considering that I don’t have the time and resources to increase my expertise, why not go with the default? For this reason, I might sign onto the default for a pension savings plan.

Or maybe, I am facing a collective action problem and the cooperative choice is set as the default. If there were no default, then it would not do much good to be the lone person making the cooperative choice. But given that the cooperative choice is the default, I can reasonably expect many people to choose this way. And this is what makes the default attractive—it gives me the opportunity to be part of a cooperative venture. For this reason, I might sign onto the default of renewable energy.

Now let us return to the nudge agenda. Note that default choices did not play a role in the two paradigm examples of nudge we discussed earlier. But granted, default setting is huge in the nudge agenda. Schmidt chooses to focus on default choices because it’s easy enough to interpret this bias as a simple heuristic. Once we do this, then default setting no longer exploits a bias, but rather, builds on an entirely reasonable decision process by real people within real-world environments. What could be offensive about that? The nudging government is exculpated from the charge of not treating its citizens as rational agents.

  3. Two Objections

Schmidt makes a clever and original move in a well-executed article on a topic that has been well grazed by philosophers. Nonetheless, I would like to raise two objections.

First, we shouldn’t forget that Thaler and Sunstein’s nudge agenda grew out of Tversky and Kahneman’s Heuristics and Biases research agenda. They conceived of nudge as designing the choice environment so that agents would come to do the right thing precisely because their reasoning and choice processes are error-prone. The critique that when government nudges its citizens, it does not treat them as rational agents, remains a valid critique of the nudging agenda as its architects conceived of their creation.

But, of course, one might say, that’s all just history. We could perfectly well conceive of it in a different manner and then the critique would no longer stand. Fair enough, but I wonder whether we would still call it nudging.

Let’s go back to my child buying a bike. I am going to make up a simple heuristic for buying a quality bike—maybe there is something to it, but I am not enough of a connoisseur to vouch for it: If the brand of the derailleur is prominently displayed, you probably have a quality bike in hand. Now suppose that I know that my child follows this simple heuristic. Suppose that I make sure that the environment in which they make their choice is such that they won’t overlook this simple clue—e.g. I shine a spotlight on the derailleur or whatever may do the job. Would I be nudging my child in this case toward making a good choice? You might say “a rose by any other name,” but really, I would be hard-pressed to call this a case of nudging. I wouldn’t quite see it as on a par with instances of nudging in Thaler and Sunstein’s work, nor would I call it ‘nudging’ in ordinary parlance.

Similarly, if default setting would be a tool that appeals to our rational nature, then I am not sure that we’d be happy to call it nudging. If a government pension fund makes an honest case that the default was set by an expert who has carefully considered the options on the basis of the best available evidence, then I don’t think they’d be nudging us toward choosing the default.

Second, it may be possible to reinterpret some biases within the Simple Heuristics framework, but we won’t be able to do it for all of them. To begin with, it’s not as straightforward to do it with the biases that are at work in the paradigm cases of nudge, namely, mindless choosing, the endowment effect and time-inconsistent preferences.

And there are even harder cases. Oxfam implemented the following nudge to increase charitable donations. You are offered a triptych of, say, an $18, $50 or $100 donation in an ad. There are various mechanisms at work. There is social-norm setting: if you were thinking about giving $10, know that it is just not done. There is a Goldilocks effect that drives you toward the donation in the center. And there is the sheer unfamiliarity of the smallest number that biases you against choosing it—$18, who gives that? So, $50 is what you click on—whereas you probably would have made a skimpy $10 donation without the nudge. Government could take a lesson from the Oxfam textbook and do precisely the same to increase your contribution to your retirement savings. And one could well imagine that this would be a successful nudge.

Would Schmidt be able to avert the critique that government would not be treating its citizens as rational agents in a similar manner? Maybe a case could be made for social-norm setting, but could we make the Goldilocks effect and the repelling effect of unfamiliar numbers into a simple heuristic? I doubt it. And if this can’t be done, then the critique of government not treating its citizens as rational agents sticks for at least some nudges, and, I suspect, for the majority of the nudges in the current nudge agenda.

Acknowledgments: I am grateful for comments from Audra Jenson and Pavel Nitchovski.

19 Replies to “Andreas T. Schmidt: ‘Getting Real on Rationality—Behavioral Science, Nudging, and Public Policy’. Précis by Luc Bovens”

  1. Thanks to Luc for this précis and these stimulating and thoughtful comments!

    In this response, I will discuss three of Bovens’ worries.

    WORRY 1: Can the endowment effect, time-inconsistent preferences and mindless choosing ever be rational?

    Ecological rationality, as I defend it in my article, is the claim that decision-making procedures are rational relative to an environment, agent, and decision-making problem. For many so-called biases, we can find contexts in which using such decision-making procedures makes a lot of sense. But Bovens worries that “it may be possible to reinterpret some biases within the Simple Heuristics framework, but we won’t be able to do it for all of them. To begin with, it’s not as straightforward to do it with the biases that are at work in the paradigm cases of nudge, namely, mindless choosing, the endowment effect and time-inconsistent preferences.”

    My first response here is: challenge accepted. Contrary to what Bovens suggests, I think my argument applies to these decision-making procedures too.

    Consider the endowment effect first. Let’s stipulate that you display the endowment effect, if you are willing to pay €X for an object but, once you own it, are not willing to sell that object at price €X + €Y, where Y is greater than zero. Outside abstract models in economics, such behaviour can be perfectly rational. For example, G.A. Cohen defends a philosophically sophisticated and substantial case for sometimes endorsing the endowment effect (Cohen 2013). Cohen argues that we often have reason to value an object that we own, or that has been with us for a while, more than an equivalent new object. What’s more, such valuing is often the normatively appropriate attitude towards such objects. Cohen, as far as I understand him, argues such conservatism is in some sense intrinsically valuable. That may or may not be the case. But even just attending to instrumental benefits, we would still have a good prudential justification for sometimes displaying the endowment effect. For example, objects have sentimental value, connect with our memories, the people we met, the places we have visited and so on. If we are disposed to flog them for small economic benefit, we might lose something of value. Moreover, sometimes you might have bought something in the past and bought it at a price which seemed roughly right. But you are not entirely sure where exactly your cut-off point would have been, maybe you would have been willing to pay more than €X + €Y. Or maybe, to put it philosophically, you simply can’t be bothered to think about selling your stuff. You are not on Antiques Roadshow and find it odious to think about selling your possessions and how much your possessions are worth to you. So, going with the disposition to stick with your possessions, unless a really good offer comes along, does not seem irrational.

    Consider Bovens’ second example, mindless choosing. Again, in many contexts, I think a disposition towards mindless choosing is rational. Should you regularly reflect on the reasons for tying your left shoe first? Or think about deliberately integrating mindless choosing into your choice environments. As president, Barack Obama is said to have had only suits, shirts and ties that matched across permutations. Obama’s theory was that being able to choose his attire mindlessly would free up cognitive energy for the other important decisions a regular day would throw at him. (Once he deviated from this and wore a tan suit. At the time, such behaviour was ‘unpresidential’ enough to cause a mild ‘scandal’ – simpler times I guess.)

    Finally, consider Bovens’ third mechanism, time-inconsistent preferences and hyperbolic discounting. Unlike exponential discounting, hyperbolic discounting does not assume a constant discount factor across time but instead uses a higher discount rate for temporal intervals closer to the present than for those further into the future. Hyperbolic discounting leads to time-inconsistent preferences which, as Bovens claims, seem rather irrational. But in real-life contexts, such preferences could be rational, when you consider uncertainty for example. Imagine I offer you €90 today or €100 next week. You prefer the €90 now. I also offer you €90 six months from now or €100 six months from now plus one week. Here you prefer the €100. If you consider the risk that I might not actually pay out the money I promise and if you are somewhat uncertain about that risk, these time-inconsistent preferences are not so irrational. If I pay out €90 six months from now, it’s quite likely I would have also paid out €100 a week later. It might thus make sense to take the risk. But when I offer you €90 now versus €100 in one week, the risk of my not paying might be higher, particularly in an environment with unreliable people around (for a more formal treatment see (Sozou P. D. 1998)).
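
    To put rough numbers on that reversal, here is a toy calculation of my own (the one-parameter hyperbolic curve and the value k = 0.2 per week are illustrative assumptions, not Sozou’s hazard-rate model):

    ```python
    def hyperbolic(t_weeks, k=0.2):
        """Hyperbolic discount factor; k = 0.2 per week is an illustrative assumption."""
        return 1.0 / (1.0 + k * t_weeks)

    def present_value(amount, delay_weeks):
        return amount * hyperbolic(delay_weeks)

    # Choice made today: 90 euro now versus 100 euro in one week.
    print(present_value(90, 0), present_value(100, 1))    # 90.0 vs ~83.3 -> take the 90 now

    # The same pair of options pushed six months (26 weeks) into the future.
    print(present_value(90, 26), present_value(100, 27))  # ~14.5 vs ~15.6 -> wait for the 100
    ```

    An exponential discounter with a constant weekly factor would rank both pairs the same way; it is the hyperbolic shape that produces the reversal, and Sozou’s point is that a curve of roughly this shape can fall out of sensible uncertainty about whether the promised payment will ever arrive, which is why the reversal need not be irrational.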

    So, I think the point I make stands, even for the decision-making procedures Bovens thinks are tricky.

    WORRY 2: INTUITIONS

    A second argumentative strategy Bovens pursues relies more on intuition. In Cafeteria, Save More Tomorrow, the bike example, and the Oxfam example, my systematic arguments might do little to remove the gnawing feeling that something fishy is going on here.

    To respond, let me first clarify what my argument is meant to achieve. I argue that nudging is not in principle committed to treating agents as irrational. Contrary to what many critics claim, the nudge approach does not imply treating agents as irrational. But I do not argue that all instances of nudging are thereby freed of the charge of treating agents as irrational – or from several other ethical objections for that matter.

    Accordingly, I could respond to Bovens that his examples are not a problem for my argument, as I did not try to show that all nudging instances are free of moral qualms. The Oxfam nudge, for example, does strike me as problematic. In Bovens’ bike example, the way he describes it, I would also agree that it seems a little inappropriate to simply trick your child into buying something by manipulating them through dinosaur toys. But note that ecological rationality can make sense of this intuition. Ecological rationality sometimes favours an educational approach over nudging, for example, when it is efficient and would help agents become more robustly rational. Ecological rationality cares both about one’s environment and one’s decision-making procedures. And the way Bovens describes the case, it sounds to me as if one would have the option to talk to one’s child and convince them to adopt a more sophisticated decision-making procedure. A parent who just opted for the easy option with the dinosaur could, and possibly should, do better.

    Now, a final comment on the Cafeteria example. In such cases, it is always important to think about the counterfactual. It might turn out that if we did not rearrange the order in the cafeteria, the cafeteria owners would arrange things so as to push their unhealthiest products with the highest margins. In such instances, ‘positive’ nudge interventions can correct choice environments that would otherwise be worse. Alongside many other companies – such as internet companies, smartphone apps, casinos – food companies and restaurant chains use behavioural techniques in ways that increase their profits but do not necessarily track consumer interests. When government steps in to prevent bad and deliberately ill-matched environments, this does not turn consumers into super rational actors. But it can sometimes prevent bad mismatches and thereby improve human agency.

    WORRY 3: IS THIS STILL NUDGING?

    Finally, I also argue in my article that many nudges need not (exclusively) rely on System 1 mechanisms but also bring in System 2 mechanisms, that is, deliberative elements. The example I gave was organ donor nudges accompanied by much political debate and transparency. Now, Bovens responds that “if default setting would be a tool that appeals to our rational nature, then I am not sure that we’d be happy to call it nudging. If a government pension fund makes an honest case that the default was set by an expert who has carefully considered the options on the basis of the best available evidence, then I don’t think they’d be nudging us toward choosing the default.”

    One problem is that Bovens implicitly takes ‘appeals to our rational nature’ to mean that it must appeal to our deliberative decision-making capacities (‘System 2’). But I of course argue that we should not hold that rationality exclusively resides in the slow, deliberative modes of decision-making. More automatic and intuitive decision modes can be perfectly rational too. So, bringing in deliberation, information and System 2 is not always necessary. But setting this quibble aside, I am suspicious about Bovens’ claim that we shouldn’t label as ‘nudges’ those interventions that combine System 1 and 2. For example, quite a few nudges combine default setting plus information. And for those that don’t, we can frequently imagine changing them in a way that makes them and their underlying reasons more transparent (Schmidt 2017). I am not moved much by Bovens’ worry about this definitional issue.

    But Bovens offers a further reason why such interventions, and my invocation of ecological rationality more generally, might fall outside the nudge agenda. Historically, the nudge programme is intellectually tied to the idea that humans are subject to biases that, much like optical illusions, lead them astray. So, given its intellectual origins, am I still talking about nudging, when I claim that nudging can extricate itself from implicit commitments to irrationality?

    One obvious response is that an historical association between nudging and the biases and heuristics programme need not be a necessary part of what the programme is now. So, does Bovens commit a ‘genetic fallacy’? Not entirely. It might well be that because of the historic connection, many proponents and practitioners of nudging still adhere to the bias view of human irrationality. But it would be a genetic fallacy if we held that any historical association between nudging and the biases and heuristics programme determines what the nudge programme should be going forward. One take-away from my article is that by engaging in choice architecture one is neither intellectually nor practically committed to thinking human agency is riddled with biases and irrationality. If practitioners think that way, then I would advise them not to. Moving towards an ecological ideal allows us to keep what is valuable about nudging. Note also how this, implicitly, serves as a critique of some of the claims coming from authors writing on ecological rationality: Gerd Gigerenzer, for example, describes nudging as being committed to viewing agents as irrational. But my article shows that such a connection, if it exists at all, is merely contingent but not necessary. We can keep most of the good stuff from the nudge approach whilst also adopting a more attractive view of human agency.

    Thanks again Luc for these great comments!


    References
    Cohen, G.A. 2013. “Rescuing Conservatism: A Defense of Existing Value.” In Finding Oneself in the Other. Princeton University Press.
    Schmidt, Andreas T. 2017. “The Power to Nudge.” American Political Science Review 111 (2): 404–17. https://doi.org/10.1017/S0003055417000028.
    Sozou, P. D. 1998. “On Hyperbolic Discounting and Uncertain Hazard Rates.” Proceedings of the Royal Society of London. Series B: Biological Sciences 265 (1409): 2015–20. https://doi.org/10.1098/rspb.1998.0534.

  2. Thanks, Luc and Andreas, for the interesting discussion! Great stuff! I guess I tend to side with Andreas on most of these points but just want to stress the following basic but fundamental point. To what extent nudgers treat nudgees as rational agents or not and to what extent specific nudging techniques promote, respect or undermine rationality depends not only on the techniques involved, but also, obviously, on what one means by ‘rationality’.

    In the discussions above, there are a couple of conceptions of rationality at play, some of which are explicitly defined while others remain quite implicit. You’ve got ecological rationality, rationality as making optimal decisions, rationality as the kind of deliberative and reflective decision-making processes that characterize System 2, rationality as reason-sensitivity (rationality implies ignoring irrelevant aspects of choice architecture), et cetera. Rationality is one of those terms where any kind of discussion quickly runs wild because people have different ideas about what it means and these often depend on people’s backgrounds (decision theorists, economists, philosophers … are all situated in long traditions in which rationality has all these diverging meanings).

    Whatever conception one favors, though, I think the main point here is to _always_ compare nudging to its alternatives. So take the ‘rationality as reflectiveness’ conception. Sometimes, presenting people with reasons works, makes people think and convinces them to make the ‘right’ choice (whatever that may be, let’s focus on health here). But this does not always work and in those cases, it may seem we inevitably face a trade-off between rationality and health. If we decide to nudge people towards healthier choices, that can only work at the cost of some loss in rationality.

    However, that’s a non sequitur. The big question, after all, is: what is the alternative? Stop nudging? But that doesn’t somehow magically restore reflectiveness. If Luc hadn’t put Barney the Dinosaur on the excellent bike, his child would have picked a flashy but crappy bike. The same goes for a lot of adults who are susceptible to these heuristics, whether we nudge them or not! So if you say ‘stop nudging as this does not treat people as rational agents’, the question is of course: ‘what does treat people as rational agents?’. Just giving them reasons, knowing that quite often, this will have no impact and some heuristic will still influence their choices?

    Which kind of presentation of bikes (bike shop) or donation amounts (Oxfam) is the one that somehow ‘respects people’s rationality’? Only the one that provides the ‘relevant features’? I don’t quite know what that would look like.

    So sure, I guess that nobody is against treating people as rational agents if this means trying to trigger and/or boost their reflective decision-making processes. The big question is what to do when we know that this is unlikely to work, regardless of whether we nudge or not. We can nudge, tapping into the heuristics that will be at play anyway, in the hope of influencing people’s decisions for the good. Or we can refrain from nudging and leave them subject to a random choice architecture (or one that is intentionally designed by e.g. companies out to make a profit). The latter option doesn’t promote anything, not even people’s rationality.

  3. By the way, Luc, is your Barney the Dinosaur case not one in which you use a nudge to promote a kind of ‘outcome rationality’? You steer your child (on the basis of an a-rational heuristic, so not a reason) to choose the best option that he would choose if he were fully rational (understood as some kind of idealized rationality or even as ‘reasons-responsiveness’). I have argued elsewhere (https://journals.sagepub.com/doi/full/10.1177/1043463119846743) that a lot of nudges promote such ‘outcome rationality’ while not decreasing people’s ‘process rationality’. As explained before, the decision-making processes, heuristics included, often remain the same, regardless of whether someone intentionally taps into them or not (Luc’s child will be moved by irrelevant features, even in the absence of Barney).

  4. Thank you Andi for the great paper. And Luc for the thoughtful comments. I am also very sympathetic to Andi’s position. I agree with Bart that whenever we are evaluating a particular proposal about what rational deliberation consists in, it is important to do so in light of the alternatives. Of course, in each particular case, there is often an alternative that does better than the heuristic, but the point of the heuristic is that it works best (gets the right outcomes) most of the time for cognitively limited agents like ourselves. So, for example, teaching your child to use the Barney heuristic is not going to satisfy this standard because in general that is a pretty bad heuristic to use. If we think of deliberation as a decision-making tool, then the question of how we should deliberate to achieve our ends is in a way simply empirical. You might want to teach your kid a more complicated decision-making procedure when it comes to purchasing bikes because it will be useful in future bike-purchasing instances. But there will be a limit here too. You probably don’t want your child to use a decision-making procedure that takes hours to apply and detracts from other better uses of their cognitive capacities. But there is a much deeper disagreement here. I think the intuition that drives the rejection of this view of deliberation is a rejection of the standard of assessment that reduces its goodness to instrumental value. But such views have to offer non-question-begging arguments for employing other standards of evaluation. Why should an agent go for an alternative procedure that leads one to make worse choices? I’m not suggesting that there is no argument here, simply pointing out that such an argument will have to be very compelling to make the case.

  5. Thanks Bart and Jen for your interesting comments! Bart, I will tease out some disagreements, although they are not very strong, and pose a challenge for both our views and for Jen’s view potentially too. Jen, I fully agree with what you say in your comment.

    First, I would agree with you, Bart, that nudges sometimes promote outcome rationality whilst also leaving procedural rationality intact, as the counterfactual of not nudging would leave people just as procedurally irrational/rational. Just to clarify, I make the stronger point that at least sometimes, nudges can do one better and promote both outcome rationality and procedural rationality. Now while promoting outcome rationality in a particular decision is not sufficient for promoting procedural rationality, on my view of rationality, there is of course a much stronger link between outcome rationality and procedural rationality. Decision-making is rational because it somewhat reliably yields good decisions relative to an environment. So, in some cases, when choice architecture picks up on people’s decision-making procedures in a way that reliably furthers their ends, it not only leaves their procedural rationality intact, it might help promote it. This makes certain kinds of nudges, but also educational measures for example, additionally attractive. Moreover, as Jen points out, it allows us to distinguish between interventions that merely lead to a one-off improvement in outcome rationality and those that improve rationality and help agents make better decisions in an environment somewhat more reliably.

    Second, I was wondering how you would respond to the following objection an opponent could make to your view, also in part because someone made that objection to me in a Q&A. An opponent might think your argument, as well as mine, works only if we think of rationality, first, as a kind of good one should promote and, second, a good that the policy-maker has relational standing to promote. But an opponent might hold that duties and deontic permissions here are more relation-specific and agent-relative: for example, the state should respect individuals and their decisions in a way that would make it inappropriate for the state to intervene in order to promote rationality. Ian Carter, for example, argues that the state should treat people and their decisions as opaque such that, as long as they are above a certain threshold of rationality, the state should not second-guess individuals or try to correct their behaviour (Carter 2011). For example, even if the state knows that Peter is in a romantic relationship that hampers his rational agency and where he is being treated in infantilizing ways, the state should not try to intervene and correct for this irrationality. Instead the state should treat Peter as if he were a fully rational adult, even if intervening improved his rationality. Similarly, your employer or a random person off the street shouldn’t try to make you less irrational in your health-related choices.

    I haven’t worked out a proper response, so I wonder what you make of the following. The above question of whether someone has standing to try to improve someone’s behaviour is somewhat orthogonal to the question of how they should do so. Values other than respecting agents as rational – like privacy, relational equality, liberal freedom, non-domination as well as just not annoying people – might explain why some people have standing to engage in choice architecture whereas others do not. Relatedly, it will be about the kinds of goods and spheres we consider: personal relationships are typically intimate and private but behaviour relating to health and the environment, for example, often is not. Relatedly, the question as to whether someone has standing to intervene would not go away if instead of nudging, one were to use ‘rational persuasion’ (which many falsely think is always preferable on grounds of respect). Would it be so different if the state always sent you information packages about intimate choices? Or if strangers on the street approached you with things like ‘I have read some Cochrane meta-reviews and here are three arguments why you should do mindfulness meditation/exercise more etc.: first, …, second,… ’ So, if you then buy my argument – inter alia, that things other than persuasion can also respect and promote people’s rationality – then there is no rationality-based argument against nudging and paternalism that uses behavioural policies. Objections to those interventions, and to paternalism more generally, have to come from elsewhere.

    Finally, Bart, while I agree that there are many different concepts of rationality (like non-normative models for building models in economics, content rationality, and procedural rationality), I still want to argue that Ecological Rationality is the correct conception of (normative) procedural rationality and preferable over normative rational choice models and ‘responding to reasons models’ of procedural rationality. I provide some arguments in the paper (pp. 517-20). Plus I think Jen’s argument in her comment – and in more detail in one of her articles – is central too: the ecological, instrumental model has a more convincing story to account for the normative source of rationality (and the ‘why be rational?’ question) (Morton 2011).

    References
    Carter, Ian. 2011. “Respect and the Basis of Equality.” Ethics 121 (3): 538–571.
    Morton, Jennifer M. 2011. “Toward an Ecological Theory of the Norms of Practical Deliberation.” European Journal of Philosophy 19 (4): 561–84. https://doi.org/10.1111/j.1468-0378.2010.00400.x.

  6. Andi, I find your response to the objection as you lay it out persuasive. But what about this other way of framing the question: Does the state have an obligation to treat you as if you were deliberating in a less cognitively limited way than you, in fact, are? One might think that the state does have such an obligation for a number of reasons. The most obvious one is the concern for autonomy that you carefully spell out in your paper. Another (not very good reason, I think) might be that in doing that, the state makes it more likely that you will engage in more sophisticated deliberation. But yet another, which you discuss a bit in the paper, is that there are some core features of democracy or state legitimacy that require that the state have this more idealized view of your capacities. For example, it is well-established that people’s views on a number of political questions are subject to all sorts of ‘irrelevant’ influences. But it would be impermissible (the objector argues) for the state to design policies that depend on seeing its citizens in that way. This circles back to the relational idea that there is something about respecting the other that might require us to treat them as if they were more ideal than they in fact are. And there might be a Strawsonian point lurking here–treating people as if they were capable of a more idealized form of deliberation is a form of seeing them as participants rather than as objects to be explained. An explanation why this might be would appeal to the fact that we hold ourselves to a more idealized form of rationality than we are often capable of actually exhibiting. We shouldn’t treat others with less ‘respect’ than that.

    I agree that “the question of whether someone has standing to try to improve someone’s behaviour is somewhat orthogonal to the question of how they should do so” but I think someone might find persuasive the argument that the state, in particular, ought to treat its citizens as having more idealized cognitive capacities than they in fact do.

  7. Thanks, Andreas and Jen, for your thoughts. First of all, I agree that some nudges, in some circumstances, can promote both process/procedural rationality and outcome rationality. In fact, some nudges, in some circumstances, can violate both, when they trigger a-rational heuristics that also do not make good (ecological) sense in that they are ‘maladapted’ to the circumstance at hand. And, yes, the distinction between a one-off impact and effects on the longer run (with people learning to use specific heuristics in specific circumstances) is an important one.

    As for Carter’s objection, if I understand it correctly, it treats rationality as something valuable but not something to be promoted (at least among adults who are due respect). I can see the intuitive force of that actually: insofar as rationality is something we (or the state) want to promote, this is mostly relevant for very specific kinds of institutions and circumstances (such as education, obviously, but also perhaps when it comes to democratic procedures). I don’t think promoting rationality is a sensible policy goal when it comes to people engaging in traffic, consumption, leisure, decisions about energy, contracts or even health care, … In those cases, it makes sense to ensure that there is a minimal threshold of rationality (e.g. something like informed consent) without necessarily aiming to promote or maximize rationality.

    Of course, on an ecological conception of rationality, it is never bad to promote rationality, since ‘ecologically rational’ simply means ‘produces good decisions in the circumstances at hand’. So on this approach, there is hardly ever a trade-off between outcome and process rationality, as they seem to converge, right?

    That said, I agree very much with you, Andreas, that the question how to treat people depends heavily on the standing of the agents involved and the domains or spheres in life at hand. This, I think, is a thought that is still largely lacking in the ethics of nudging literature. What exactly is the standing of the nudger, what is her relation to the nudgee and what, in the context of their relationship, is the appropriate kind of behavior? This is often the right question to ask but a hard one to answer in abstracto. I guess it all depends on the specifics of each situation, right? The state can legitimately try to promote specific kinds of choices that some people would regard as intimate (e.g. stimulate teens to take birth control measures, …). In some cases, I welcome my mom nudging me but in other cases, I insist on making my own mistakes (and ignoring whatever my mom says). The role that rationality should play in these different domains and for different agents is also hard to determine in some kind of general way.

    Perhaps it makes sense to take an ‘intervention ladder’ approach: first try to rationally persuade, give reasons, … and if that does not help (and you think the policy goal is important enough to engage in slightly more intrusive measures), you can try to nudge (or incentivize) people, … and if that does not help, you can try more coercive measures. I think this is often what we do implicitly in social life anyway. If I know that merely informing my colleague will do the trick (and get her to make the kinds of decisions I think are good), I’ll do that. If that doesn’t work, I might try to make an option more salient. If that still doesn’t work, an incentive or sanction/reward might do the trick. In general, I guess, this approach would make sense in quite a lot of spheres and relationships.

    If you think rationality is crucial in a specific circumstance (such as an educational setting), you might want to avoid specific kinds of nudges and invest more heavily in rational persuasion or, when it is deemed necessary, go for coercion (which violates liberty but generally not rationality).

    And, yes, always trying to rationally persuade people can be very annoying, and even harmful (if you assume that people only have limited cognitive bandwidth that they should be able to spend as they see fit).

  8. Bart, I don’t think that a lot of theorists of education actually think of rationality as a central aim in educational settings. Of course, you want to teach students to think critically and so forth, but in elementary education, for instance, you want to teach students to have certain dispositions–kindness, empathy, patience, curiosity–that are not straightforwardly derivable from rationality or achieved through rational persuasion. My point isn’t simply that at younger ages rational persuasion fails, but rather that an important part of educating a citizen is to cultivate in them dispositions and habits that are generally good for them to have in a non-System 2 way. First and foremost I want my child to grow up to be a kind person, not one that deliberates about the rationality of showing kindness in a particular case. The literature on citizenship education takes deliberation to be one facet of a host of other dispositions that make for good citizenship–tolerance, empathy, cooperativeness, etc.

  9. Completely agreed, Jen. A ‘virtue ethics’ or ‘character education’ approach to education makes sense. But I guess that the disposition / ability to think critically and systematically is something to be cultivated in education (next to these other kinds of dispositions) in a way that it is not in other domains.

  10. Thanks Jen! To pick up on your earlier suggestion and how you suggest reframing it: ‘there are some core features of democracy or state legitimacy that require that the state have this more idealized view of your capacities. For example, it is well-established that people’s views on a number of political questions are subject to all sorts of ‘irrelevant’ influences. But it would be impermissible (the objector argues) for the state to design policies that depend on seeing its citizens in that way.’

    Let me have a try at making a stronger response within the ecological framework to these sorts of views. The response overall goes like this: once we reject the view that rationality is exclusively about the ‘deliberative, System 2 ideal’ – once we reject ‘heroic rationality’ – the duty to treat agents as if they were fully heroically rational makes much less sense and can sometimes be spectacularly counterproductive.

    I think what is nice about my view is that it allows policy-makers, politicians and the wider public to treat people as they are. And they can do so without having to look down on people: people are in fact often much better at making decisions than the heroic ideal gives them credit for. What seem to be shortcomings on the heroic deliberative view of rationality are often not shortcomings on the ecological ideal but useful decision-making procedures. In some ways, once we move away from heroic rationality, we not only have a more realistic view of real agents but also a more positive one. This would really help with the ‘respect bit’ that partly motivates the ‘let’s idealise and treat people as if they were fully heroically rational’ argument.

    A second argument would be that taking agents as they are would be beneficial. Conversely, idealising real people into heroically rational agents can be highly counterproductive. For policies, getting real on rationality will allow you to design policies for real people, rather than policies that work for fictitious Kantians or utility maximisers. At the same time, one does not have to shy away from using evidence and reason in publicly justifying them and from providing the conditions for democratic control. Relatedly, and on the topic of democratic deliberation and legitimacy: Jen, I think you are right in saying that in this context the commitment to more deliberative modes of reasoning is often presumed to be the right one including a commitment to ‘idealising’ citizens as very informed and competent reasoners on political matters and such. I think this can be seen in political science, which for a long time tried to reconstruct how voting decisions rationally tracked policy preferences, which – it turns out – they usually don’t. Or it can be seen in how political discourse, I think, requires politicians, academics and commentators to pay lip service to how wise voters are. Norms to pretend people know everything, or even enough, about politics, economics, law and policy can have bad effects, or at least I think there is some anecdotal evidence for this. For example, being against a referendum, like the Brexit referendum in the UK, is easily reframed as thinking that the public is not intelligent enough to make such a choice. Of course, it is ridiculous to expect people to competently predict what would happen after a decision like the Brexit vote. On a realistic ecological model, I think, one would not have to pretend that they do know everything, or even enough, about a question as complex as Brexit. In fact, as Jason Brennan argues convincingly I think, given normal people’s ends, preferences and the financial and time constraints they have to deal with, it would indeed be highly irrational for most people to spend so much time and effort to know everything about things like EU law. Another negative effect might also be, again anecdotal, that citizens themselves must pretend that they know more than they do (which they typically do) and rationalise their preferences and voting decisions. I found (Achen and Bartels 2017) to be very good on this, particularly their chapter ‘It Feels Like We’re Thinking: The Rationalizing Voter’.

    I got a little carried away here, not at all animated by disillusionment with recent political events of course. But I think the point stands that commitment to idealising towards ‘rationalistic’ deliberative rationality can have lots of negative consequences. At the same time, I think a realistic ecological ideal is compatible with making room for the concerns that Carter and others would raise, for example, by being committed to privacy, freedom, non-domination, to non-judgemental approaches to policy and by sometimes simply abstaining from enquiring too much into how people make decisions in some areas of their lives (like how individual citizens behave around their friends for example).

  11. I’m late to the party but wanted to chime in. I’m also mostly in agreement with Andreas and Ben. (Shameless plug: I argue for a similar conclusion in a recent paper in the Journal of Medicine and Philosophy). One thing I’d like to add here is that it’s important to keep separate issues of rationality and issues of respect. Of course, interfering with people’s rational decision-making is one way to fail to respect agents (or their agency or autonomy). But it’s not the only way. And I agree that a lot of nudging does not interfere with rational decision-making. So I don’t think nudging involves a failure to respect agents in virtue of its effect on their rationality.

    Andreas says: “Would it be so different if the state always sent you information packages about intimate choices? Or if strangers on the street approached you with things like ‘I have read some Cochrane meta-reviews and here are three arguments why you should do mindfulness meditation/exercise more etc.: first, …, second,… ’ So, if you then buy my argument – inter alia, that things other than persuasion can also respect and promote people’s rationality – then there is no rationality-based argument against nudging and paternalism that uses behavioural policies. Objections to those interventions, and to paternalism more generally, have to come from elsewhere.”

    I want to suggest where else objections to nudging might be coming from. Perhaps the concerns about interference with rationality are, more fundamentally, concerns about respecting agents. In short, people think the government shouldn’t be in the business of influencing people’s decisions—whether or not that influence is rational. Call this the “it’s none of your business!” objection to nudging. But like Andreas says, it’s not a rationality-based objection.

    Now I’m not saying that nudging actually involves a failure to respect agents in this way. In fact, I’m not even sure I understand what “respecting agency or autonomy” means in most philosophical contexts. (Forgive me, Kant.) But I suspect this might be underlying some people’s intuitions about the problem with nudging.

  12. Thanks Bart for your follow-up comments!

    Here is a quick comment on whether we should promote – rather than often just ‘respect’ – rationality. Let me double down here by using the ecological ideal. I think that while it is probably true that the state shouldn’t try to promote rational agency across all domains, it still seems much more plausible than potential opponents might think that the state and many other societal actors should promote rational agency in many policy contexts. Again, I think this becomes much more plausible once we move from a ‘deliberative’ heroic ideal of rationality towards an ecological view. Here are some arguments.

    First, a conceptual argument would go like this: ecological rationality, by definition, helps with achieving certain valuable ends. If a policy-maker is tasked with helping people further those ends, then it typically seems like a good idea to bolster their rational agency in achieving those ends. Sometimes, not always of course, achieving ends through behaviour change will automatically bolster rational agency.

    Second, in many contexts, achieving certain ends over some time requires people becoming better agents in the ecological sense. For example, many medical treatments go beyond just one-time treatments at the clinic or a doctor’s practice. This is particularly obvious for chronic conditions. Such treatments require patients to behave certain ways in their day to day lives, like take their medication, do their exercises, engage in CBT etc. Much attention is now spent in medicine, as far as I can tell, on thinking about how to improve patient behaviour to increase treatment effectiveness. Such things can include helping patients develop good habits, acquire certain heuristics, set up their environment, use technical support (like apps), integrate health behaviour into their social context, use commitment devices etc. The point is that ‘achieving (policy) ends’ often comes together with promoting agency because it is required for achieving those ends.

    Third, a liberal ideal around autonomy further supports the idea that we have reason to bolster people’s rational agency, because it typically tends to further their autonomy.

    Fourth, there are some direct benefits to being a better agent beyond the fulfilled ends. For example, the psychological literature on personal control suggests that viewing yourself as an agent and being somewhat in control, rather than being helpless, has psychological benefits.

    Finally, promoting rational agency in the ecological sense is very broad and flexible. For example, in education, it does not mean turning people into hyper-deliberators. Cultivating a broad range of skills and virtues – including critical thinking, statistical competence but also kindness, social skills, and good intuition – can often contribute towards ecological rationality. People typically have ends in social contexts and so many of those virtues and skills will be highly conducive to operating successfully in social contexts.

    Bart, you probably agree with most of this 😉 But I thought it worth exploring how promoting rational agency becomes much less weird once we move away from heroic rationality towards ecological rationality. Promoting rational agency then becomes much broader than promoting ‘heroic rationality’, and it becomes clearer why promoting rationality is valuable.

  13. I do agree with you, Andreas, and that also holds for Timothy’s remark that a lot of the worries people have about nudging are based on an ‘it’s none of your business’ attitude (to me it’s interesting to see that this attitude is often more pronounced when it comes to governments using nudging techniques than when it comes to companies).

    Just a question for Andreas and Jen here. Andreas, in his Ethics paper, defines procedural rationality as follows: “a person’s decision is procedurally rational in an environment to the extent that, given her particular psychological makeup, the decision-making procedures she uses allow her to reliably achieve her ends in this type of environment.” What exactly are people’s ends here, and how do you discover them in circumstances where what people prefer and choose depends on the choice environment (just think of cases of framing)? How broadly do you interpret those ends? I might take ‘leading a long and healthy life’ as an end, but on a Friday evening, I might also want to go out and binge drink. What are my ends in cases like this?

    I’ve got views of my own here, but I just wanted to ask you two, because you need a proper answer to this question if you want to avoid ecological rationality becoming empty. It also haunts the ‘means paternalism’ that Sunstein defends: sure, it makes good sense to try to install nudges that help people realize their ends (and not be an ends paternalist who tries to nudge people to change those ends), but then again, people ask, what exactly are those ends, and what are reliable, non-arbitrary ways of finding out what they are?

  14. Thanks Bart, this is indeed a tricky challenge (and I would be interested to hear what your solution is). I have a short footnote in the paper on something like this (footnote 72). Let me here develop this a bit more, in a very preliminary fashion of course. Sorry for the length of the reply!

    The challenge is to capture people’s ends, particularly in light of the fact that, first, people’s ends differ (Diversity Challenge) and, second, people’s revealed preferences are themselves subject to choice architecture (Preference Shift Challenge), which seems to require some more substantial normative claims about what people’s ‘real ends’ are.

    Note first that nudging policies, even though they are often described as libertarian paternalism, are often not paternalistic. Some nudges, like environmental nudges or organ donation nudges, are not paternalistic but ‘pro-social’. I guess the problem is primarily thought to be about the paternalistic nudges, so I will mainly focus on those here. (Contrary to philosophers, people seem to respond to paternalistic nudges more positively than to pro-social nudges, probably thinking ‘at least I am getting something out of it’.)

    With that in mind, consider the Diversity Challenge first. The first important point is that this just isn’t a problem specific to nudging. The diversity of people’s ends affects all public policies, not just nudging. When we devise environmental policies or health policies like tobacco control, and we are thinking about whether we should have a public health campaign, new regulation, taxes, or whatever, we have to make such judgement calls. So, it’s a pretty deep problem that affects public policy more generally, not just nudging. That might not be a very satisfying response. But it is a complex ethical and practical question what goals a public health system should have. Arguing for nudging as a plausible tool for public policy does not require solving this deeper normative problem.

    Second, compared with many alternative policy interventions, nudging even has a comparative advantage, because it preserves freedom of choice and allows people to act contrary to a nudge. Accordingly, given the diversity of ends people might have, nudging accommodates diversity of preferences better than more stringent interventions. Moreover, when that diversity is ethically very important and it is desirable that people with different preferences not follow the nudge, then one desideratum of good nudges is that they remain sufficiently resistible, which most nudges are (Saghai 2013). For example, if you do not want to be an organ donor, you can opt out. I also don’t worry too much here about the worry that people who really want fatty or sugary food or really want to binge drink would be disadvantaged by nudges. It is quite easy to resist a nudge when you crave donuts or have already had a few drinks and want to keep drinking (resisting the urges themselves is far harder). Finally, there are some nudges, although still relatively few, that can be personalised, which should further help accommodate the diversity of ends and preferences.

    Third, as with other good policies, we would not want to impose highly controversial ends top-down and in democratically uncontrolled ways. In the paper, I argue that, like other policies, nudges should be democratically controlled and transparent. Functioning democratic systems should help filter out really controversial nudges whose ends conflict either with widely held reasonable preferences or with other important values (for example, nudges that violate bipartisan democratic norms). But this is not a trivial problem.

    Finally, note how diversity cuts both ways: leaving a choice architecture as is will work for some people but not for the many people with different ends. So, the objection that nudging will disadvantage people with different ends neglects how the counterfactual can disadvantage others. For good nudges, the status quo will disadvantage more people than the nudge (which, again, one can also opt out of). For example, most people would like more privacy, yet the privacy defaults on Google, Facebook, etc. don’t reflect this widespread preference. Requiring a more stringent privacy default would disadvantage the few who want less privacy, but the counterfactual does far less well in accounting for most people’s preferences.

    Consider the Preference Shift Challenge next. This problem presents a challenge for my argument that nudges can enhance rationality, because we now need to make a judgement call about what people’s ends are even when their revealed preferences are not stable across changes in choice architecture. I basically dodged that question to some extent by going with ‘ends’. I am also hesitant to impose strong ‘rationality constraints’ on what people’s ends are or even should be (but I haven’t worked this out yet). But I think there are still different ways to spell out why some ends are more important from the agent’s own perspective than others, and some such normative criteria might be suitable for public policy in diverse societies. Typically, ends go beyond just one individual decision and beyond a highly specific preference. People have ends that are broader and more far-reaching than preferences like ‘at 12:30 today I happen to feel like eating fries’. But how do we identify such broader ends? One could go with something like Frankfurt’s coherence between desires, build in some deliberative component (like Williams), see which of someone’s ends improves their wellbeing or life satisfaction more (that’s a bit less subjectivist), or see which ones are more central to people’s conception of the good (Rawls). Alternatively, one can simply ask people what they hope to achieve across a number of decisions and what matters to them (before and maybe after).

    Pragmatically speaking, in many nudge policies it is often relatively clear what people’s ends are and clear that people care about them, like when they try to lose weight, try not to be poor when they are old, quit smoking so as to prevent dying, and so on. And those ends typically survive the different subjectivist tests mentioned above. And, again, in cases where people genuinely don’t care about those ends and always prioritise more short-termist desires, nudge policies are much more flexible in allowing for those than other, more stringent policies. [A somewhat different, but pretty good, route is to say that some paternalistic policies improve people’s freedom of choice across time, like when they prevent earlier deaths from smoking cigarettes. In light of preference change across time – a robust empirical phenomenon – people should rationally prefer more lifetime freedom, which can then give you a justification for paternalism, particularly when it promotes freedom-enhancing all-purpose goods, that is somewhat independent of people’s momentary preferences (an argument I develop in Schmidt 2017).]

    Now, how does that relate to the Preference Shift Challenge? I think the argument is typically that preference shift in light of choice architecture is precisely a reason for giving much less weight to revealed preferences: the revealed-preference perspective is neutral between the two situations, and one cannot really prioritise one over the other. When that is the case, one has an easier time, normatively speaking, moving on to the deeper ends people might have.

    Independently, I think ethical theories built on revealed preferences are pretty unpopular these days, so the alternative is not more appealing. Sugden has recently had another go at defending something like it. He argues against the above move and holds that in a market economy such revealed preferences are important because markets respond to what people want, and what people want here can matter even if it is not coherent across time (the argument on the latter point is quite complex (Sugden 2018)). And in some situations it seems fine to have a preference for something that includes a preference for how it is presented. When the cake in the shop is presented in an appealing way, that presentation might be a plausible part of the whole experience and shouldn’t be dismissed as irrelevant (Sugden 2008). While such an argument works somewhat in this example – and nudging can allow for such cases – I find it hard to see how it should work for most standard nudge cases. Does it significantly improve my lunch experience when the donut was presented first? Am I having a better time buying something because of price framing effects? And so on. Moreover, as I argue in the paper, drawing on Akerlof and Shiller, there are just so many instances of ‘negative nudges’ in markets that really counteract people’s ends and that really don’t enhance people’s consumption experience (Akerlof and Shiller 2015).

    References
    Akerlof, George A., and Robert J. Shiller. 2015. Phishing for Phools: The Economics of Manipulation and Deception. Princeton: Princeton University Press.
    Saghai, Yashar. 2013. “Salvaging the Concept of Nudge.” Journal of Medical Ethics 39 (8): 487–93. https://doi.org/10.1136/medethics-2012-100727.
    Schmidt, Andreas T. 2017. “An Unresolved Problem: Freedom across Lifetimes.” Philosophical Studies 174 (6): 1413–38. https://doi.org/10.1007/s11098-016-0765-5.
    Sugden, Robert. 2008. “Why Incoherent Preferences Do Not Justify Paternalism.” Constitutional Political Economy 19 (3): 226–48. https://doi.org/10.1007/s10602-008-9043-7.
    ———. 2018. The Community of Advantage: A Behavioural Economist’s Defence of the Market. Oxford: Oxford University Press.

  15. Thanks, Andreas, I think that is very enlightening. Again, I agree with almost all of your claims. Sure, nudges can legitimately aim to realize all kinds of (policy) goals, not just paternalist ones. And sure, the ‘what are ends’ question permeates policy discussions and provides a challenge for any kind of policy maker considering which tool to use.

    In my mind too, ends are broader, more general, and in a sense more fundamental than the narrower, more specific preferences people often form on the fly (which is the source of the Preference Shift Challenge). In addition, I agree that there usually are ways to figure out what people’s ends are. Take nudges that are built into the design of artefacts, like smartphones, websites, and doors: we know what people want to achieve with these things (make calls, get information, get into and out of rooms), so their design should be geared towards helping them reach those ends. And yes, nudges actually compare favorably to other policy tools in accommodating people whose ends are not what policy makers think they are (or take as legitimate policy goals).

    I also don’t have a worked-out answer to the ‘what are ends’ question, but I agree that it plausibly relies on ideas from Frankfurt (higher-order preferences) and Rawls (conceptions of the good). One could also think of this in terms of people’s ‘life plans’ or ‘the things they care about’. These things are, again, more abstract and more fundamental. They can come into conflict: my life plan for a long and healthy life conflicts with my life plan to enjoy life’s more basic pleasures, such as (Belgian) beers.

    In my view, the broader character of ends and the easy resistibility of nudges give (paternalist) policy makers quite some slack. Why not go for health-promoting nudges if we can (1) safely assume that most people do not want to die an early death and (2) those who really do favor a ‘live hard, die young’ attitude are still able to act on this end?

    Of course, this slack is also given to companies that want to steer people towards the kinds of choices that maximize their profits while still falling within people’s broad ends. I would bite this bullet: beer companies can nudge me towards beer consumption without violating my ability to set and achieve my ends. Of course, this makes the (ethical and democratic) question of which ends policy makers should promote all the more important. While both health-promoting and beer-consumption-promoting policies are justifiable on the basis of (a judgement call about what) people’s ends (are), there are good, independent reasons why policy makers should prioritize the former over the latter.

    Do you also feel that we are in broad agreement here?

    Final remark, Andreas. I think your last comment (and mine as well), about conceptualizing and discovering people’s ends and about how nudges can plausibly and legitimately promote them, can be formulated without endorsing an ecological conception of rationality, right? I ask because ‘ends’ figure in your definition of what is ‘ecologically rational’, which implies that you can only claim that nudges promote ecological rationality if you have an answer to the ‘what are ends’ question. I look forward to a more thorough development of the tentative answer you gave here in a new paper (Schmidt 2020) in, why not, Ethics? 🙂

  16. Hey Bart,

    I think I am in broad agreement here. I would separate the question of what ends should ultimately underlie politics, public policy, and so on at the deepest philosophical level – just for the record, I am attracted to welfarism here – from the question of what ends we can identify in real-life contexts in diverse societies where people have different conceptions of the good. For the latter, many of the ideas in your comment (and, unsurprisingly, mine above) seem right to me.

    I kind of dodged the question of what ‘ends’ means in the characterisation of ecological rationality. It is a very complex question whether rationality is only ever instrumental, such that we cannot say that any ends people have are irrational in virtue of their content (roughly the ‘Humean’ view), or whether rationality also makes certain ends themselves irrational (even beyond formal criteria such as consistency or transitivity). I lean towards the former, which I roughly assume for the purposes of the paper. To then make the rationality argument – i.e. that some nudges can promote rationality – we can use some of the things we have said above to identify the ends relative to which nudges promote ecological rationality.

    I think one of the things we can extract from our exchange is that the nudge approach does not solve hard questions of what ends public policy should have. Saying nudging is only about means paternalism is too quick and in a way unhelpfully merges the question of whether we are justified in using nudge techniques – and how we should do so – with the question of what ends should be pursued through nudging. Another thing to extract, however, is that while nudging in no way solves the problem of what ends public policy should pursue, it somewhat ameliorates it, because nudging gives more leeway for people’s different preferences. Accordingly, in some domains and for some ends, nudging is more easily justifiable than other policy interventions.

    A comment on pro-social nudges. Most of the argument I make focuses on ends that people primarily have about themselves, like eating healthily, saving for retirement, and so on. But the argument that nudges can promote rationality also works for many pro-social nudges, as long as there is enough support for them. The link with rationality becomes particularly strong in cases where outcomes require collective action. For example, if only I do x, but no one else does, then I won’t bring about good G; only if enough others do x will we bring about G. Some people would do x out of principle even if not enough others do so too; other people would do x only if enough others do so too. A nudge towards x might not only make one more likely to do x; one might also have reason to believe enough others will do x, or one might even read into the nudge that there is a social norm that one should do x – which people often read into defaults. Accordingly, broadly following the nudge can be in line with the collective and individual ends we have (both for the lone principlist and the conditional cooperator). So, pro-social nudges can promote rationality too. If one wanted to take this into more philosophical terrain, one could argue that this improves not only individual but also collective rationality. This seems plausible to me, but making the claim would require spelling out collective rationality and its role in public policy, and it would require engaging in some abstract debates about collective agency and such.
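    As a rough, purely illustrative sketch of the conditional-cooperation point (none of this is from the paper; the thresholds and numbers are made up), one can model agents who do x only if they expect enough others to do x, with a default or nudge raising that expectation:

```python
# Hypothetical toy model: a nudge/default is represented as raising everyone's
# expectation of how many others will do x. Thresholds and numbers are made up.

def does_x(threshold: float, expected_uptake: float) -> bool:
    """An agent does x if the share of others they expect to do x meets their threshold."""
    return expected_uptake >= threshold

# Principlists (threshold 0.0) do x regardless; conditional cooperators need company.
thresholds = [0.0, 0.0, 0.4, 0.5, 0.6, 0.7]

for expected_uptake, label in [(0.2, "no nudge"), (0.6, "with nudge/default")]:
    n = sum(does_x(t, expected_uptake) for t in thresholds)
    print(f"{label}: {n} of {len(thresholds)} agents do x")
# Without the nudge only the principlists act; with it, most conditional
# cooperators join in, so following the nudge fits both individual and collective ends.
```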

    PS: some Belgian beers are now available as 0.0 in the Netherlands, so I am enjoying them too 🙂

  17. Thanks to all of you for a great discussion. There are two cases that keep on bothering me, assuming that I buy the model that the nudge mechanism appeals to ecological rationality. The concerns are related to some of the points raised about end-rationality and about providing relevant information. In any case, I’d be curious about your thoughts.

    1. What if the nudge mechanism brings about a socially desirable outcome and people appreciate the shift in behavior ex post, but the shift actually reduces the weight of a particular moral value that was held ex ante? My favorite example is this one: we may nudge the population toward accepting gender-neutral restrooms, but some groups could reasonably object that we thereby lose a kind of modesty that made people resist such a move in the first place (Bovens and Marcoci 2018). Similar concerns can be raised against opt-out policies for post mortem organ donation by religious groups who underwrite respect for the integrity of the corpse, or against various public health initiatives that take some of the fun out of life by subscribers to a ‘live hard, die young’ attitude.

    2. There are many cases in which providing people with relevant information would not steer them toward the behavior desired from a public health perspective. My favorite case is Spiegelhalter’s graph on breast cancer screening (Bovens 2019; Spiegelhalter 2015). If we lay out the benefits of lives saved and the costs of unnecessary treatments incurred due to screening, then women may no longer opt for screening, and thousands of lives would be lost as a consequence. There are ways to present the information differently so that they will continue to opt for breast cancer screening, though that information should arguably be less relevant to their decision-making. Now maybe we want to say that information so presented taps into their ecologically rational decision-making. But I sympathize with Jeremy Waldron’s (2014) concern that not presenting the information that is relevant to decision-making is a violation of people’s dignity.

    In some cases, this is an issue of securing socially desirable outcomes by reducing harm to others. For example, telling people the chances of getting into an accident by driving drunk versus driving sober on their 10-mile stretch home is not going to keep them from drinking; other (less relevant) stats will be more successful in this regard. But the breast cancer case is foremost about reducing harm to self. And it does make me nervous that the behavior which public health nudges aim to bring about, because it saves thousands of lives, is actually not the behavior that fully informed, risk-conscious people would choose.

    References

    Bovens, Luc, and Alexandru Marcoci. 2018. “Gender-Neutral Restrooms Require New (Choice) Architecture.” Behavioural Public Policy Blog. https://bppblog.com/2018/04/17/gender-neutral-restrooms-require-new-choice-architecture/.
    Bovens, Luc. 2019. “The Ethics of Making Risky Decisions for Others.” In Oxford Handbook of Ethics and Economics, edited by Mark D. White, 446–473. Oxford: Oxford University Press. https://philpapers.org/rec/BOVTEO-6.
    Spiegelhalter, David. 2015. “A Visualisation of the Information in NHS Breast Cancer Screening Leaflet.” https://understandinguncertainty.org/visualisation-information-nhs-breast-cancer-screening-leaflet.
    Waldron, Jeremy. 2014. “It’s All for Your Own Good.” New York Review of Books, 9 October 2014. https://www.nybooks.com/articles/2014/10/09/cass-sunstein-its-all-your-own-good/.

  18. Thanks Luc for this comment.
    I am not an expert on the gender-neutral toilet debate. My uninteresting answer is that it depends on what values would be improved, what values would potentially change, and whether desirable pluralism and respect for different reasonable conceptions of the good are ensured. My general impression of the bathroom case is that whether we should make the changes you propose, and how exactly we should make such changes, should be decided after careful deliberation that takes into account people’s (reasonable) needs, preferences, and values. Intuitively, this suggests to me that it is not unreasonable that, alongside gender-neutral options, more intimate, single-stall options or non-gender-neutral options should be provided too (as you suggest). Even though I am not always a fan of the value of ‘modesty’ (it can come with some baggage regarding gender equality), it is something that should typically be respected in a pluralist society with different conceptions of the good, needs, preferences, etc. So, if nudges cause norm changes, we should be mindful of whether such changes still come with enough respect for differing reasonable conceptions of the good. I would hope that more democratic and transparent processes improve the chances that respect, tolerance, and pluralism are maintained.

    I don’t worry too much about the organ donation case here. It can be enacted very transparently, and it can be made obvious that it is a personal choice and that society respects that one might have reasonable motives for opting out. The Netherlands recently introduced an opt-out system. It remains to be seen, but I don’t think it will crowd out ‘respect for the integrity of the corpse’. For example, it only passed the second chamber with a slim majority, and the official communication doesn’t make the opt-out option seem wrong in any way (as far as I can tell).

    One could also make the ‘tu quoque’ move again. Lots of public policy leads to norm changes, and for those policies, including nudges, we always have to have a difficult debate about whether that is desirable or not. Lessig (1995) has a good article (more like a book) on how regulation and law can change social meaning. Sometimes, of course, norm changes can make policies desirable, but it is something to discuss on a case-by-case basis.

  19. This from Andreas Schmidt:

    Luc, I probably have something more interesting to say on the breast screening case. The first theoretical point is that ecological rationality does not per se favour external nudging over information provision; often quite the opposite. What it definitely prefers is correct information that actually helps people understand their medical decisions. To this end, we should empirically investigate what kinds of information provision actually help people understand health information. As I say in the paper, ecological rationality favours improving rational agency, and effective information provision can be an important part of that. Moreover, medical decisions are not just about mortality risk; morbidity risk and quality-of-life reductions are important factors too, and these can only be factored in through good patient decisions supported by helpful presentation of the data. So, as an overall practice, it seems a good idea to have effective information provision to strengthen patient autonomy – even if it leads to some suboptimal decisions – rather than to try to mislead or withhold information. Moreover, existing actors in healthcare, particularly commercial actors, have their own interests, which need not perfectly overlap with overall public health, and this can lead to the ill-matched nudges I talk about in my paper. An ecological rationality perspective would require a more rigorous examination of whether existing health provision actually helps patients make good decisions.

    More interestingly, in the case you present, my framework would in fact strongly support more and better information provision and should thus alleviate your worries. Gigerenzer has a good chapter on breast cancer screening and similar issues (Gigerenzer 2014, chaps. 9–10). Let me summarise briefly what he says (I haven’t checked his data, so don’t quote me on that). Contrary to what you say, it doesn’t seem that, from a public health perspective, it is (obviously) better if women go for breast cancer screening. Let us first assume that breast cancer screening does reduce breast cancer mortality rates. The number I have seen is that, with regular breast cancer screening, 4 rather than 5 out of 1,000 women above 50 die from breast cancer within a ten-year period. This is sometimes sold, very misleadingly, as a ‘20 percent risk reduction’, although the absolute risk reduction is only one in one thousand. Plus, according to the numbers in the Gigerenzer book, 100 out of 1,000 women who do regular breast cancer screening experience false alarms, biopsies, or psychological distress as a result, and 5 out of 1,000 have unnecessary treatments such as mastectomies. In light of these very real potential costs and the small reduction in absolute risk, it can be entirely rational to decide that these numbers speak against regular breast cancer screening. So, I would disagree with your judgement that there is a case for withholding information so as to ‘nudge’ women into breast cancer screening.
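    To make the relative-versus-absolute point concrete, here is a minimal sketch (purely illustrative, using only the per-1,000 figures quoted above from Gigerenzer) of how the same numbers yield a headline ‘20 percent’ relative risk reduction but only a one-in-a-thousand absolute reduction:

```python
# Illustrative only: the figures are those quoted above from Gigerenzer (2014),
# per 1,000 women above 50 over a ten-year period.
deaths_without_screening = 5 / 1000  # breast cancer deaths without screening
deaths_with_screening = 4 / 1000     # breast cancer deaths with screening

absolute_risk_reduction = deaths_without_screening - deaths_with_screening
relative_risk_reduction = absolute_risk_reduction / deaths_without_screening

print(f"Absolute risk reduction: {absolute_risk_reduction:.3%}")  # 0.100%, i.e. 1 in 1,000
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # 20%
```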

    What is more, it seems that if you look at the overall cancer mortality rate (rather than just breast cancer mortality), it is the same for those who had breast cancer screening and those who did not (21 out of 1,000), probably because cancer patients often have more than one type of cancer and it can be unclear which one was the cause of death. So, if the numbers that Gigerenzer presents are right, there is no evidence that breast cancer screening reduces overall cancer mortality.

    Gigerenzer, Gerd. 2014. Risk Savvy: How to Make Good Decisions. New York: Penguin.

    Lessig, Lawrence. 1995. “The Regulation of Social Meaning.” The University of Chicago Law Review 62 (3): 943–1045. https://doi.org/10.2307/1600054.
