In his new book, Moral Value and Human Diversity, Robert Audi introduces an ethical theory he calls pluralist universalism.
The view is not altogether new but rather an original collage of some of the existing
ethical theories. Audi doesn’t argue much for the view or explain how it is
supposed to work in practice, but I thought it would be worth introducing here.
I would be interested to hear what everyone makes of it. I’m going to end with
one worry I have.

Audi starts by introducing the major ethical theories, all of which he
finds attractive in their own ways. He wants to combine three
popular views: virtue theory, Kantianism, and utilitarianism. The combination
of these views, he thinks, can capture the three main ethical values that any
plausible philosophical account must find room for. These are:

  1. Happiness (understood as well-being consisting in the balance of pleasure over pain and suffering).
  2. Justice (understood as a requirement to treat persons equally).
  3. Freedom (of which Audi says surprisingly little).

It’s clear that there is a connection between the ethical
theories Audi wants to combine and the main values he has chosen. Utilitarianism
is centered around happiness (1) but also defends the impartiality of 2 (and
many utilitarians argue for freedoms too). Kantians often take something like 2
as the core of their view but also advocate the importance of freedom (and
happiness too). And even virtue theorists think of justice as a virtue and of
happiness as the ultimate aim resulting from virtue.

In any case, on the basis of the theories and values above, Audi formulates the
following moral principle:

(PU1) Optimise happiness so far as possible without
producing injustice or curtailing freedom (including one’s own).

(PU2) And, internalise (PU1) so
that it will be automatically presupposed and strongly motivating in a way that
yields moral virtue.

Because the values 1-3 are incorporated in (PU1) and also
put into a lexical order, an agent who has internalised that principle, as
(PU2) requires, will know what to do when the basic moral values conflict.

It is important to notice that Audi is explicit about the
way in which pluralist universalism places the main moral values in a precise
order of importance. Justice and freedom are more valuable than happiness.
Justice and freedom themselves do not need to be put into an order, according
to Audi, because justice requires the maximal amount of freedom that is possible
within the limits of peaceful coexistence. The value of freedom cannot demand
more than that.

And that’s about it. What do you think? Here’s the main
worry I have. I am wondering about the consequences of giving maximal
freedom such an absolute ranking in the principle.

The first part of the principle is basic utilitarianism –
optimise happiness. In one sense, if justice is treating all persons equally,
then the first part cannot conflict with the justice requirement.
Utilitarianism treats everyone equally because everyone’s happiness counts the
same.

However, one classic worry about utilitarianism is that it
seems to leave no room for the freedom of the agent. Presumably, in any
situation there are only one or a few ways of acting that optimise
happiness. By utilitarian standards all other options are wrong: options that
one is not free to take. If this is right, then optimising happiness itself
curtails freedom in all cases – especially one’s own freedom to take whatever
other options seem appealing at the time. Audi is, however, clear that freedom
always trumps the optimising requirement. And Audi really means a lot of
freedom – freedom limited only by threats to peaceful coexistence.

Maximal freedom is also supported by the other disjunct of
the second part of the principle (the justice requirement), which likewise always
outweighs the optimising of happiness. This leads me to think that the first part
of the principle, optimise happiness, never has any bite on the actions of moral
agents. Absolute freedom, supported by a formal conception of justice, wins out at
the end of the day over any other alleged moral demands. The worry is that Audi is
presenting the wolf of libertarianism in the sheep’s clothing of Kantians,
utilitarians and Aristotelians.

I also have other worries. Does justice always side with the
maximal amount of freedom compatible with peaceful coexistence, or does justice
also require giving people what they deserve even when this limits the freedoms
of others? Is the principle (PU1) substantial enough to yield moral virtue?

7 Replies to “Pluralist Universalism”

  1. Jussi,
    It doesn’t seem that a moral theory that tells a person what to do in a circumstance thereby limits that person’s freedom in that circumstance. After all, we all know how to disobey moral theories, and what is the theory going to do to stop us? So I don’t think it is fair to say a theory limits freedom just because it gives lots of advice, as most consequentialist theories do.
    But I do think that until you say what freedom is (is it an ability, is it an absence of limits imposed by the actions of others on one’s choices, does it concern the abridgement of one’s choices regarding every activity or only certain morally important choices, . . .), you haven’t really said much. I myself can’t see how a version of the above principle could be plausible unless, in saying what freedom is, it severely restricts the range of activities that must not be interfered with or must be possible to do. In other words, I think only a moralized conception of freedom could plausibly be given priority over other values without paying attention to the circumstances of the choice. But I haven’t read the book and there may well be answers to this worry in there.

  2. “The worry is that Audi is presenting the wolf of libertarianism in the sheep’s clothing of Kantians, utilitarians and Aristotelians.”
    I like this phrasing, and if your summary does justice to Audi’s account, then it really seems that this is exactly what he’s doing.

  3. Mark,
    I think you are right, and that must also be what Audi has in mind. When he addresses the issue later, he writes:
    ‘[A]lthough one may voluntarily devote one’s life to enhancing happiness of humanity [i.e., do optimising], this is not obligatory’.
    I take it that ‘not obligatory’ here means having the freedom not to do so. I am uneasy about this. Just before, he said that the moral principle *requires* optimising happiness. Requiring sounds a lot stronger than advising. I guess I’m uncertain what is meant by a requirement that one is free to flout by the lights of the theory itself. If freedom, on the other hand, just means a lack of physical constraints, then this is fine.
    Maybe there is a weaker worry. Take a community that maximises mutually compatible freedoms while still allowing peaceful coexistence. How happy that community is seems to be an open question. There is some empirical evidence that freedom-maximising societies are not the happiest. If limiting freedoms in some ways would make the society vastly happier, then utilitarians would say we should do so. Audi seems to think that this is never what we should do. Maybe there is a middle ground.
    I agree on your second point. Maybe some sort of strong positive freedoms could be put to work here. From the remarks Audi makes on freedom, this does not seem to be what he has in mind – his notion of freedom seems to be rather minimal.

  4. I agree that, absent some peculiar Marxist notion of “freedom”, the instruction to optimize happiness insofar as it does not limit freedom is a non-instruction.
    Plus, I never like Frankenstein theories (i.e., theories cobbled together from the parts of other, more integrated theories) in philosophy anyway.

  5. This is really more of a question than a criticism, I suppose, as I haven’t read the book. I have the same worries about freedom that Jussi does – the optimization of happiness and the curtailment of freedom, including mine, surely conflict on any reasonable understanding of freedom. Assuming that optimizing happiness will occasionally require giving away money that would otherwise have gone toward my purchase of an iPod, doing so would curtail my ability to buy that iPod. It strikes me that moral theories don’t limit freedom (i.e., we can certainly disobey them), but promoting happiness surely does, and promoting happiness is what the moral theory requires us to do without curtailing freedom.
    Anyway, leave this aside. PU1 looks like a claim about outcomes – how we ought to judge whether or not one’s actions are right. In other words, in consequentialist lingo, it sounds like a “criterion of rightness” – or at least we could charitably treat it as such. But PU2 looks like it tells the moral agent to treat PU1 as a “decision procedure”. If this is right, familiar worries about treating the optimization of happiness as a decision procedure creep in: turning people into utility-calculation machines, and all that. Of course, Audi might respond to this worry by insisting on the “don’t curtail freedom or justice” clause. But I’m not sure this response works. Given the lexical priority of not curtailing freedom or justice, it might seem to turn regular agents into “anti-freedom-curtailment” calculation machines. I’m not sure that’s any better. There is probably an eloquent two-sentence response to this worry in the book.

  6. Dale,
    I had similar worries about what would follow if we internalised PU1 as our decision procedure, as PU2 suggests. But you are also right that Audi has a two-sentence reply. The eloquence of these sentences I’ll leave for you to judge. Here we go:
    “But suppose we understand our most general principles – whether double-barreled, as Kant’s intrinsic end formula is, or triple-barreled, as the suggested pluralist universalism is – in the light of the commonsensical principles Ross articulated, which are supported by all the major ethical standards (in part for that reason) will be the concrete standards I will most often treat as a starting point for ethical reflection. If Rossian principles are taken as a major starting point in ethics, it is often quite clear what we ought to do.”
    Phew. As far as I understand the first sentence, the idea is that PU1 is coextensive with, and justifies, Ross’s prima facie duties. Because of this, if one internalises the Rossian duties one also, inter alia, internalises the more fundamental PU1. The Rossian duties are then supposed to guide the agent to do what PU1 requires without the costs of deliberating with PU1 itself. The obvious worry is that, as PU1 gives strict lexical priority to freedom maximising, it’s not at all clear how the Rossian duties follow from it. They do not seem to be freedom maximising in the same way.

  7. There seems to be something suspicious about calling the view “pluralist” and then offering a prioritized ranking of the three values, and even if that suspicion can be laid to rest, it still seems that an attempt to prioritize is going to be subject to scores of counterexamples (see, e.g., Nagel’s “The Fragmentation of Value”). (Perhaps I’ll actually think of some later…)
