Brian Jabarian has graciously provided a critical précis, which includes an overview of Rowe & Voorhoeve’s view and arguments, an outline of their central cases, and some points for discussion. The overview appears below. As the second and third parts rely rather heavily on careful formatting, the entire précis is attached here as a PDF. The authors have also provided a set of relevant tables.

Please join in the discussion!

*  *  *

The Moral & Rational Costs of Uncertainty Aversion
Brian Jabarian

Pluralist egalitarianism holds that

[one] should improve people’s prospects for well-being, raise total wellbeing, and reduce inequality in both people’s prospects and in their final well-being (how well their lives end up going) (Rowe & Voorhoeve, 2018, 243-244)

While a comprehensive defence of pluralist egalitarianism can be found elsewhere, the authors here extend this theory from cases of risk to cases of uncertainty. They build this extension by showing that the resulting view is a morally and rationally permissible distributive theory of justice under uncertainty.

Before discussing the challenges this extended theory faces, it is worth providing some background. Pluralist egalitarianism is built on two kinds of principles: moral and rational. The former are egalitarian and sketched in the definition above. The latter, however, need some elaboration. For cases under risk, these rational principles are derived exclusively from standard decision theory. One cornerstone orthodox principle under risk relevant here is that the decision-maker is capable of making up their mind to form (or has access to) precise probabilities regarding the possible states of nature. For cases under uncertainty, the decision-maker loses this ability. Depending on the severity of the uncertainty, one is left, at best, with a reasonably narrow range of probabilities (cases of “moderate uncertainty”); at second best, with extremely wide intervals of probabilities (cases of “severe uncertainty”); and, at worst, with no probabilities at all (cases of “maximal uncertainty”). In these uncertain cases, an additional principle is required to guarantee that a decision is rational.

Several candidates are in the running to meet this rationality requirement in the context of distributive justice. The challenge is to choose one that successfully fulfils the following two conditions. First, the new principle has to be not only rationally permissible under uncertainty but also morally permissible (from a pluralist egalitarian perspective). Second, the principle has to be flexible enough to fit within a broader, dynamic framework of distributive justice that allows differential, and (morally and rationally) permissible, attitudes towards risk and uncertainty.

The new principle Rowe and Voorhoeve rely on is the uncertainty aversion principle. It holds that when choosing between a risky prospect and an uncertain prospect, one opts for the risky prospect. Let us consider whether this principle is morally and rationally permissible. Its rational permissibility has two dimensions: descriptive and normative. We can attest that the uncertainty aversion principle is descriptively accurate, since empirical results show that it adequately describes how subjects behave under uncertainty.

However, the normative rational permissibility of this uncertainty-averse attitude is still contested in the philosophical and economic literature. The debate has not yet reached a consensus on whether a rational agent may permissibly display an uncertainty-averse attitude. Rowe and Voorhoeve do not engage in this controversy. They instead adopt an assumption defended by some leading decision theorists, according to which it is rationally permissible, though not rationally required, to display an uncertainty-averse attitude. While this debate has dwelt at length on the normative rational permissibility of the attitude, much less has been said about its moral permissibility. Rowe and Voorhoeve’s paper is important because it fills this gap, and it is original in developing a specifically egalitarian interpretation of uncertainty aversion.

To justify the moral permissibility of uncertainty aversion, the authors proceed as follows. They first propose a different meaning for “equality” than the standard one. Usually, equality is understood in terms of outcome values: two situations are equal if and only if individuals end up equally well off. One could extend this claim to uncertainty such that the only morally relevant information for defining “equality” would still be the outcomes’ values. However, according to the authors, this definition of equality, pertaining exclusively to the value of final well-being, would leave out crucial moral information needed to design a fair system of distributive justice. Accordingly, one should incorporate the experience of uncertainty itself into the definition of equality under uncertainty. This integration can be seen as a moral benefit or a cost in the system of distributive justice.

For Rowe and Voorhoeve, facing uncertainty is a “burden” (op. cit. p. 242) in the sense of depressing the value of an individual’s prospects. It should therefore correspond to a moral cost. Let us see why in the following situation. Suppose Ann will go wholly blind unless she is treated. As her doctor, you have two alternative treatments. The first treatment is well known and risky: it has a 50% chance of curing her and a 50% chance of having no effect on her. Since you have administered it in the past, and so have access to a small record of successes and failures, you hold confident prior beliefs matching these objective estimates of success and failure. The second treatment is entirely new and maximally uncertain: it leads to a full cure or to no cure at all, with no objective estimate of success or failure available. Since it is so new, you are not familiar enough with it to make up for the absence of objective estimates by forming precise prior beliefs about its effectiveness. Despite leading to the same two possible levels of final well-being as the risky treatment, the uncertain treatment, in prospect, bears a moral cost which, granting uncertainty aversion, it would be morally impermissible for anyone concerned with Ann’s welfare to choose to incur on her behalf.
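
To make this concrete, here is a minimal sketch of how an uncertainty-averse evaluation depresses the prospective value of the uncertain treatment relative to the risky one. The well-being levels (50 if uncured, 80 if cured) and the fixed “caution” weight are assumptions made purely for illustration; they are not part of the case as Rowe and Voorhoeve state it.

```python
# Minimal sketch: why an uncertainty-averse evaluator assigns the maximally
# uncertain treatment a lower prospective value than the risky one.
# The well-being levels (50 / 80) and the caution weight of 0.6 are assumptions
# made for illustration only.

no_cure, cure = 50, 80

# Risky treatment: a known 50-50 chance of a cure.
risky_value = 0.5 * no_cure + 0.5 * cure                     # 65.0

# Maximally uncertain treatment: the chance of a cure could be anywhere in [0, 1],
# so the expected value lies somewhere between 50 and 80. An uncertainty-averse
# evaluator gives more weight to the worse end of that range.
caution = 0.6
uncertain_value = caution * no_cure + (1 - caution) * cure   # 62.0

print(risky_value, uncertain_value)  # the uncertain prospect's value is depressed
```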

In sum, pluralist, uncertainty-averse egalitarianism favours alternatives for which more fine-grained probabilistic information about the states of nature is available. Moreover, this view treats uncertainty as important moral information on which a fair distributive decision should rely, and holds that it should count as a moral cost (in the sense of depressing the value of individuals’ prospects) in the system of distributive justice.

4 Replies to “Thomas Rowe and Alex Voorhoeve: ‘Egalitarianism under Severe Uncertainty’. Précis by Brian Jabarian”

  1. Tom and I are grateful to Brian for his careful engagement with our proposed form of uncertainty-averse, pluralistic egalitarianism. His post consists of (1) a summary, (2) questions and (3) criticism (best read in the full PDF linked to above). Each raises important issues, so we’ve decided to offer responses not ‘in one go’, but rather in a series of posts. Also, since Tom and I are not able to construct these answers side by side, and fully coordinating our posts with each other would slow us down too much, we’ll each post separately. Let’s see if he and I agree!

    Here I will address Brian’s summary, to further clarify our project. A severely uncertain prospect (sometimes called ‘an ambiguous prospect’) is one in which a decision-maker is not in a position to non-arbitrarily assign precise probabilities to the prospect’s possible outcomes. Cases in which decision-makers must evaluate such prospects abound. One example we open the paper with is the UK government facing the choice whether or not to purchase huge quantities of a new vaccine against the H1N1 ‘Swine Flu’ in 2009, when the experts in the Department of Health could not supply the decision-makers with precise probabilities for the possible degrees of severity of the disease. Another example is climate change. For key propositions, such as the proposition that in a ‘medium future emissions’ scenario the Earth will warm by more than 2.0 degrees centigrade, the most authoritative report available, by the Intergovernmental Panel on Climate Change (IPCC), purposely does not report precise probabilities, but rather ranges of probabilities. (To illustrate: it judges that in one such medium-emissions scenario, “warming is *likely* to exceed 2.0 degrees”, where this means “has a probability of between 66% and 100%”; see p. 10 of https://www.ipcc.ch/site/assets/uploads/2018/05/SYR_AR5_FINAL_full_wcover.pdf). A smaller-scale example involves a doctor considering whether to prescribe a patient with multiple sclerosis (MS) a very novel medicine, for which evidence regarding the probability of its efficacy in their patient is extremely limited. The discussion of such cases in the literature on distributive justice is strikingly limited. Brian correctly characterizes our project as one of proposing a form of egalitarianism which offers both morally and rationally permissible responses to such cases.

    We start with a form of pluralist egalitarianism which cares about four things: (a) reducing inequalities in the value of people’s prospects; (b) reducing inequality in how people end up, or as we call it, their final well-being; (c) improving the value of people’s prospects; and (d) improving how they end up. Here, we want to register one point of disagreement with Brian’s characterization of our view, which he calls ‘non-standard’. In fact, however, it seems to us one of the most popular forms of egalitarianism. Because it cares about unequal outcomes and unequal chances, it explains why, say, if you have an indivisible resource that you must distribute to only one of two equally needy people, for each of whom it would do equal good, you should flip a coin, because this generates equal prospects even if it does not generate equal final well-being (which would be even better, but which is unattainable in this scenario).

    What *is* non-standard in our paper, as Brian points out, is that we explore what would happen if we combined this view with “uncertainty aversion”, which basically consists in a decision-maker’s strictly preferring, other things equal, a prospect which is such that they are in a position to non-arbitrarily assign precise probabilities to its possible outcomes (which, following the terms used in the literature, we will call a ‘risky prospect’) to one for which they are not in this position (an uncertain prospect). To continue the example of the doctor considering whether to prescribe their patient an existing, well-known drug for MS or a wholly novel drug: suppose that for both drugs, there are only two possible outcomes: no impact (lifetime well-being of 50) and full cure (80), and that only one of the two drugs can be prescribed. The familiar, risky drug, in the doctor’s estimation, gives the patient a 50-50 chance over these outcomes, whereas the doctor believes the novel drug gives their patient a chance of a full cure of between 25% and 75%. Then if the doctor is uncertainty averse, they will favour the well-known, risky treatment on their patient’s behalf.

    Now, as it turns out, many doctors are indeed uncertainty averse in this sense on their patients’ behalf. (See Gustavo Saposnik, Angel Sempere, Daniel Prefasi, Daniel Selchen, Christian Ruff, Jorge Maurino, and Philippe Tobler, “Decision-making in Multiple Sclerosis: The Role of Aversion to Ambiguity for Therapeutic Inertia among Neurologists,” Frontiers in Neurology 8 (2017), article 65.) Indeed, empirical studies that we cite strongly suggest that many people are uncertainty averse on their own behalf. But of course, for our project to be interesting, it is not enough to say that decision-makers are often uncertainty averse; it must be the case that such an attitude is morally permissible, or reasonable.

    There is much debate on this issue among decision theorists and philosophers. The existence of this debate alone makes it worthwhile to explore what would follow if uncertainty aversion were permissible. But while we do not enter into the debate in detail, contrary to what Brian seems to suggest in his précis, we do more than merely assume that it is permissible. In a nutshell, the argument we offer for its permissibility is this (it is not an original argument, but follows work by Jim Joyce and Itzhak Gilboa, among others).

    Step 1: When faced with situations of the kind outlined–the H1N1 virus, climate sensitivity, a wholly novel drug–rationality does not require one to “go beyond one’s evidence” and assign, arbitrarily, precise probabilities to the outcomes of each prospect that one might choose. Instead, it permits one simply to represent one’s beliefs in terms of ranges of probabilities assigned to each possible outcome, as, say, the IPCC does, or as the doctor does in our simple example.

    Step 2: When a decision-maker has such imprecise probabilities, they cannot compute a single expected value for a prospect. But they can compute a range of such expected values, corresponding to the different assignments of probabilities over outcomes that are compatible with the information they have. For example, assuming that more warming is worse, the *worst* expected value of the “medium emissions” prospect mapped out by the IPCC will be the one on which there is a 100% chance that it leads to >2.0 degrees of warming; the *best* expected value of this prospect is the one on which there is only a 66% chance that it leads to such warming. All the IPCC conclusions allow one to say is that the expected value of this medium-emissions prospect lies in the range given by these values. Or, to take our MS example: the doctor can say that the novel medicine has an expected value in the range of 57.5 (0.75*50 + 0.25*80) to 72.5 (0.25*50 + 0.75*80). The crucial normative claim is this: in the face of this range of expected values, it is permissible (we do not claim it is required!) to be cautious, in the sense that, in making an overall assessment of the uncertain prospect’s value, one may permissibly give more decision weight to the less good expected values than to the better expected values. To apply it to our examples: when assessing how bad a policy of “medium emissions” would be, one is permitted to give more decision weight to the possibility that it would certainly lead to more than 2.0 degrees of warming than to the possibility that it would only have a 66% chance of leading to such warming. And the doctor can permissibly take the prospective value of the novel medicine to be less than the mid-point between 57.5 and 72.5 (so less than 65, the expected value of the well-known medicine).
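
    As a rough illustration of this cautious weighting, here is a minimal sketch of the MS case just described; the specific caution weight of 0.6 is an assumption made for illustration, not a figure from the paper.

    ```python
    # Minimal sketch of the cautious evaluation described above, using the MS numbers.
    # The caution weight of 0.6 is an illustrative assumption.

    def expected_value(p_cure, v_no_effect=50, v_cure=80):
        """Expected well-being given a precise probability of a full cure."""
        return (1 - p_cure) * v_no_effect + p_cure * v_cure

    def cautious_value(p_cure_low, p_cure_high, caution=0.6):
        """Give more decision weight (caution > 0.5) to the worse expected value."""
        worst = expected_value(p_cure_low)   # 57.5 when p_cure_low = 0.25
        best = expected_value(p_cure_high)   # 72.5 when p_cure_high = 0.75
        return caution * worst + (1 - caution) * best

    print(expected_value(0.5))         # 65.0: the familiar, risky drug
    print(cautious_value(0.25, 0.75))  # 63.5: the novel drug, below the mid-point of 65
    ```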

    Now, we will offer more by way of defence of the permissibility of uncertainty aversion later in response to Brian’s challenging case. But the basic ideas are simple and attractive: No requirement to go beyond the evidence; a permission to be cautious.

    So our project is to explore what happens when one combines *a* permissible attitude to uncertainty with pluralistic egalitarianism. As Brian correctly observes, the principal impact of uncertainty in our framework is through its depressing effect on the value of prospects. We emphasise that this takes two distinct forms. First, uncertainty depresses the value of a prospect for an individual (as in the MS case). Second, it depresses the value of a social prospect (as in the climate change case). But one final clarification is in order before, in a subsequent post, we turn to his questions and criticism. Brian sometimes writes that on our view, the “experience” of uncertainty creates this disvalue. This may suggest that it is an experience of anxiety or some other aversive state of mind that we are concerned with. This is not the case. The patient for whom the doctor is considering the novel treatment may never have any knowledge of the uncertainty surrounding it (they may be unconscious, say, during the decision-making process and until the uncertainty is resolved), and the people on whose behalf decisions regarding climate policies are taken may equally be unaware of the uncertainty (they may not even exist yet). Our view is that in such cases, it is permissible to be averse to uncertainty when evaluating prospects from the perspective of their interests and from the perspective of the value of the social states that we may bring about. It is the uncertainty itself which one can regard as a disvalue, not merely the experience of it.

  2. This post, co-written with Tom, continues our response to Brian, now focusing on the questions for discussion that he formulates. Here, we reply to his questions (1) and (2); we will turn to questions (3) and (4) later. We have paraphrased Brian’s questions below to make clear how we interpreted them.

    1. Is it reasonable to display uncertainty aversion for societal decisions even when both the value of each individual’s prospects and the value of population-level (social) prospects are very high?

    In essence, we take this question to be about whether one’s degree of uncertainty aversion should vary with the quality of people’s prospects, and in particular, whether that aversion should disappear when these prospects are good.

    While the particular formula we use for illustrative purposes (the Hurwicz-Arrow criterion for decision-making under uncertainty) assumes a fixed degree of uncertainty aversion for all decisions, we are open to the idea that different contexts might make different degrees of uncertainty aversion reasonable. But it seems to us that the suggestion that uncertainty aversion should disappear when prospects are of high value is mistaken. For it seems reasonable to display uncertainty aversion even when the value of individual-level and social-level prospects is high. For example, in our H1N1 influenza virus case, the typical person living in the UK in 2009 had (by global and historical standards) excellent prospects. And, by these standards, the typical person’s prospects in the UK would remain good even if we were to assign a high chance to the Government’s “reasonable” worst-case scenario, which (in the absence of a mass immunisation campaign) involved 65,000 deaths due to H1N1 (roughly 0.1% of the population). But this seems a case in which uncertainty aversion on each individual’s behalf is warranted. Of course, given that there is an unknown chance that 0.1% of the population will die and that this will generate a lot of inequality in lifetime well-being, this virus may be seen as substantially depressing the value of the social prospects that also concern the government. But even this was unclear; for if one assigned, say, an unknown probability in a range of 1%-10% to this worst-case scenario coming to pass, then the effect on the current value of the “prospects of the British population” would be modest; these prospects would still be high by historical and global standards. But aversion to uncertainty would be sensible.
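
    To make this concrete, here is a minimal sketch of a Hurwicz-Arrow style evaluation of the population-level prospect in this case. Every figure in it (the baseline prospect value, the size of the worst-case drop, and the caution weight) is invented solely to illustrate why the effect on the prospect's value is modest while uncertainty aversion still applies.

    ```python
    # Illustrative Hurwicz-Arrow style evaluation of the population-level prospect in
    # the H1N1 case. All figures are invented for illustration only.

    def hurwicz(worst_ev, best_ev, caution=0.6):
        """Fixed-weight mix of the worst and best expected values (caution in [0, 1])."""
        return caution * worst_ev + (1 - caution) * best_ev

    baseline = 80.0         # assumed average prospect value of UK residents in 2009
    worst_case_drop = 2.0   # assumed drop in that average if the worst-case scenario occurs

    # The probability of the worst-case scenario is only known to lie between 1% and 10%.
    worst_ev = baseline - 0.10 * worst_case_drop   # 79.8
    best_ev = baseline - 0.01 * worst_case_drop    # 79.98

    print(hurwicz(worst_ev, best_ev))  # ~79.87: still high, yet evaluated with caution
    ```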

    Furthermore, the typical person in lab studies on uncertainty attitudes (a student in a developed country with high expected lifetime earnings) makes decisions between prospects that each have minimal impact on their lifetime wealth. But they still often display uncertainty aversion.

    2. For any distributive theory of justice under severe uncertainty, should we consider the experience of uncertainty as an impermissible moral cost and avoid it at all expense, even if this would violate rational principles?

    As mentioned in our reply to Brian’s summary, it is worth clarifying that the account of severe uncertainty as a “burden” is not a reference to the experience of the individual who is exposed to the uncertainty. The burden depresses the value of an individual’s prospects even if the individual has no knowledge of the uncertainty they face.

    A second point of clarification is this. Our view emphatically does NOT hold that one ought to avoid an uncertain prospect in favour of a risky or a certain prospect at all cost. It merely holds that it is permissible to incur some cost (in terms of the quality of achievable outcomes) in order to avoid uncertainty. But as this cost increases, there comes a point at which one ought to choose the uncertain prospect. For example, consider our doctor prescribing medication in the MS case. If the novel medicine’s “worst possible outcome” was 50 + e (with e positive), instead of 50, while the familiar medicine had a downside of merely 50, then while for a small e, it would be permissible to choose the familiar medicine out of uncertainty aversion, for a larger e, it would, in our view, become obligatory to choose the novel medicine.
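
    Continuing the illustrative MS sketch above, one can see roughly where that point would fall. The threshold below depends entirely on the assumed caution weight of 0.6; it is offered only as an illustration, not as part of our view.

    ```python
    # How large must the improvement e to the novel drug's worst outcome be before even
    # a cautious evaluator (assumed caution weight 0.6) must prefer the novel drug?

    def cautious_value_novel(e, caution=0.6):
        worst_ev = 0.75 * (50 + e) + 0.25 * 80   # cure chance at its lower bound (25%)
        best_ev = 0.25 * (50 + e) + 0.75 * 80    # cure chance at its upper bound (75%)
        return caution * worst_ev + (1 - caution) * best_ev

    familiar_drug = 0.5 * 50 + 0.5 * 80          # 65.0

    for e in (0.5, 2.0, 3.0):
        print(e, round(cautious_value_novel(e), 2), cautious_value_novel(e) > familiar_drug)
    # Prints False, False, True: with these assumed numbers the switch-over comes at e ≈ 2.7.
    ```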

  3. Here is our reply to Brian’s questions 3 and 4, paraphrased below.

    3. When a social planner has no information regarding citizens’ attitudes towards uncertainty, which option should she take between a severely uncertain option with high pay-offs and a slightly uncertain option with lower pay-offs? (Or between an uncertain prospect with high pay-offs and a risky option with lower pay-offs?)

    4. When a social planner does have information regarding citizens’ attitudes towards uncertainty, is it permissible for her to ignore this information and always follow the recommendation made by an uncertainty-averse distributive theory of justice?

    In relation to Q3: We are merely concerned with articulating a reasonable way to evaluate prospects. We therefore don’t say what a decision-maker *should* do, only that some degree of uncertainty aversion is permissible. So all our view implies is that it is permissible to incur some cost in terms of lower possible levels of final well-being (at an individual level) or worse possible distributions of final well-being (at a social level) in order to lessen the extent of uncertainty. But since we do not specify a permissible range of degrees of uncertainty aversion, our view does not aim to answer Q3 with any great degree of precision.

    Brian’s questions suggest a further interesting issue, however, which is what the relationship should be between the decision-maker’s degree of uncertainty aversion and the affected individuals’ uncertainty attitudes. We think the answer depends on context in the following ways.

    Case A. The decision-maker is an agent of the individual(s) affected or otherwise charged with acting on their behalf, the individual(s) affected are adults, but the decision-maker does not know and cannot find out their degree of uncertainty aversion.

    In this case, it seems reasonable for the decision-maker to use an uncertainty attitude that would offer a best approximation of the attitudes of the individuals on whose behalf they are deciding. Empirical data suggests that the mean attitude is moderate uncertainty aversion (see Stefan Trautmann and Gijs van de Kuilen, “Ambiguity Attitudes,” in The Wiley Blackwell Handbook of Judgment and Decision Making, ed. Gideon Keren and George Wu (Chichester: Wiley, 2015), pp. 89–116, table 1). This would therefore make it reasonable to employ a moderate (but not very strong) degree of ambiguity aversion in such cases.

    Case B. The decision-maker should act to promote the interests of the affected individuals, who are children (or who otherwise lack authoritative uncertainty attitudes that one should try to mimic or defer to).

    This is in fact the situation in all of the cases in our paper involving prospects for Ann and Bea (see p. 244). We assumed this situation precisely to set aside questions of an obligation to defer to the affected individuals’ uncertainty attitudes. We think the absence of such an obligation provides space for the decision-maker to exercise their judgment about what is best for the affected individuals, and that it is permissible if they exercise that judgment by taking a cautious, uncertainty-averse perspective.

    Case C. The decision-maker is not an agent of the affected individuals, is using a resource that the decision-maker has substantial discretion to use, and the decision-maker knows the uncertainty attitudes of the individuals affected.

    An example of this kind might be a private charitable donation or beneficent gift, where the giver has rightful control over the resource. In this case, we do not think that it is required for the benefactor to defer to the uncertainty attitudes of the affected individuals. For the benefactor to use their own uncertainty attitudes (so long as they are within a reasonable range) does not violate the autonomy of any of the individuals concerned or fail to show these attitudes proper respect. For example, suppose I know that you are uncertainty-loving, so would prefer, as a birthday present, a possibility of winning on a maximally uncertain draw. Instead, I give you a lottery ticket with a known probability of winning, because I am uncertainty averse when evaluating prospects on other people’s behalf and by my method of evaluating these prospects, the merely risky ticket is more valuable. This seems perfectly permissible.

    Case D. The decision-maker is an agent of the individual(s) affected or otherwise charged with acting on their behalf, the individual(s) affected are adults, and the decision-maker knows or can make informed estimates of their degree of uncertainty aversion.

    This is perhaps closest to our opening H1N1 case, in which the decision-maker is an elected representative using resources that the citizens have contributed to the government through taxation. Here, we think the decision-maker should respect the uncertainty attitudes of the citizenry.
    How precisely this would work would depend on the composition of attitudes in the citizenry. In a hypothetical society where every individual is uncertainty-neutral, proper deference to their attitudes would require the decision-maker to be uncertainty neutral on their behalf as well. So then uncertainty aversion on their behalf would likely not be permissible.

    The situation is more complex in a more realistic scenario in which a decision-maker must use a single attitude towards uncertainty for large-scale social decisions for populations with diverse uncertainty attitudes. As suggested above in Case A, the decision-maker could use various techniques to arrive at a reasonable compromise attitude, such as taking the mean attitude or minimizing some other aggregate distance measure. As mentioned, empirical data suggest that, in reality, moderate uncertainty aversion is a decent candidate for such a compromise attitude. This would therefore make it reasonable (and perhaps, out of deference, even required) to employ a moderate degree of ambiguity aversion in such cases.
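
    As a toy illustration of such aggregation (all of the individual attitudes below are invented; 0.5 marks uncertainty neutrality and higher values mark greater uncertainty aversion):

    ```python
    # Toy sketch of forming a single compromise uncertainty attitude for a population
    # with diverse attitudes. The individual 'caution' weights are invented.

    import statistics

    individual_caution = [0.50, 0.55, 0.60, 0.75, 0.45, 0.65]   # assumed survey results

    mean_attitude = statistics.mean(individual_caution)      # ~0.58: minimises total squared distance
    median_attitude = statistics.median(individual_caution)  # 0.575: minimises total absolute distance

    print(mean_attitude, median_attitude)  # both point to moderate uncertainty aversion
    ```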

    In sum, in realistic versions of each of these diverse cases, given the data available about people’s actual attitudes, it seems permissible for the decision-maker to be moderately ambiguity averse.

  4. The use of the medical examples reminds me of the current norm that the doctor should suggest the patient enrol in an appropriate trial with the goal of reducing such uncertainty for future decision-making – the other uncertainty aversion. More generally, offering an appropriately biased lottery between the definitely risky and the uncertain options would balance the welfare of future others against that of the currently affected.
