In this post, I shall argue for the conclusion that there is no such thing as moral vagueness. The argument rests on a certain assumption, which I myself believe to be true. The crucial assumption is that the fundamental ethical or normative concepts are all essentially comparative notions, such as ‘__is better than__’ and ‘There is more reason for__than for__’.
If this assumption is true, there is no moral vagueness. The moral realm is as precise as the realm of mathematics. Locke’s notorious talk of “moral geometry” is to that extent entirely appropriate.
In her paper “Moral vagueness is ontic vagueness” (Ethics 2015), Miriam Schoenfield gives several alleged examples of moral vagueness. All of these examples involve alleged borderline cases of permissibility. One such example is the following:
It is impermissible to amputate a person’s arm [sc. without their consent] to save another’s life. It is permissible to amputate a person’s arm to save a billion lives. How many lives must be at stake for it to be permissible to amputate someone’s arm? Plausibly, we can create a Sorites series, admitting of borderline cases of permissibility, out of a series of amputations, each of which is performed to save an increasing number of lives.
If the fundamental ethical and normative notions are all essentially comparative, then every proposition about “permissibility” is equivalent to an explicitly comparative proposition.
Now, there may be several different ways of using the word ‘permissible’. But to keep things simple, let us suppose that as it is used in this context, ‘A is permissible’ is equivalent to ‘There is at least as much reason for A as for any available alternative B’. Thus, for A to be permissible is for it not to be the case that there is an alternative B such that there is more reason for B than for A.
Consider the series of cases that Schoenfield gestures towards (and calls a “Sorites series”). How are we to analyse this series of cases given this interpretation of “permissibility”?
Well, presumably:
- In the first case C1, there are two options: A1, which involves saving one person’s life by amputating another person’s arm [without their consent], and B1, which involves not amputating anyone’s arm and letting the first person die.
- In general, in every case Cn (for n = 1, …, 1,000,000,000), there are two options: an A-option, An, which involves saving n people’s lives by amputating one other person’s arm [without their consent], and a B-option, Bn, which involves not amputating anyone’s arm and letting those n people die.
In each case Ci, there is one reason against the A-option, Ai – namely, it involves amputating one person’s arm without their consent, and a corresponding reason in favour of the B-option – namely, it involves not amputating anyone’s arm without their consent. Call this the “reason not to amputate”.
Similarly, in each case Cj, there is one reason in favour of the A-option, Aj – namely, it involves saving j people’s lives – and a corresponding reason against the B-option – namely, it involves letting j people die. Call this the “reason to save lives”.
Presumably, as we go along the series from each case to its successor, the weight of the reason to save lives gets gradually greater and greater, while the weight of the reason not to amputate remains constant.
In the first case C1, the reason not to amputate is weightier than the reason to save lives – this is why the A-option is impermissible in this case.
In the last case C1,000,000,000, the reason not to amputate is not weightier than the reason to save lives – this is why the A-option is not impermissible in this case.
On this analysis, there has to be a last case in the series where the reason not to amputate is weightier than the reason to save lives. The immediately following case will be the first case where the reason not to amputate is no longer weightier than the reason to save lives. In other words, there is a sharp cutoff point on this series.
Note that this is true even if not all the “weights” of reasons are comparable with each other — so that these weights can only be partially ordered. Even if the weights are not all comparable and so only partially ordered, classical logic, together with the analysis of the case, implies that there must be such a cutoff point somewhere on the series.
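The logical structure of this cutoff claim can be made vivid with a small sketch. All of the numbers and the weighing function below are invented purely for illustration (they are no part of the argument): if the reason to save lives starts out less weighty than the reason not to amputate, and ends up not less weighty, then classical logic alone guarantees a last case where it is less weighty.

```python
# Toy sketch (all numbers invented): the weight of the reason to save
# n lives is modelled by an arbitrary increasing function, and the
# reason not to amputate by a fixed weight w_star.
def save_weight(n):
    return 3 * n                 # hypothetical increasing weight

w_star = 1000                    # hypothetical fixed weight of the reason not to amputate

cases = range(1, 1001)

# By the Law of Excluded Middle, each case either satisfies
# "save_weight(n) < w_star" or it does not; since case 1 satisfies it
# and the last case does not, there is a *last* case that satisfies
# it: the sharp cutoff.
cutoff = max(n for n in cases if save_weight(n) < w_star)

assert save_weight(cutoff) < w_star
assert not (save_weight(cutoff + 1) < w_star)
```

Nothing here depends on the particular function chosen; any finite series that starts on one side of the comparison and ends on the other must contain such a last case.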
Of course, we can never know exactly where on the series this cutoff point comes. But there is no reason to think that the explanation of our ignorance has anything to do with vagueness. The explanation seems to be exactly the same as the reason why we cannot always know that one mass is weightier than another mass, given the obvious limitations of our powers of discrimination. (E.g., suppose that you had to tell which mass was heavier by holding one mass in each hand, and trying to judge which mass feels heavier…)
‘Mass 1 is weightier than Mass 2’ is not intuitively vague (or if it is vague, it is only because of vagueness in the referring expressions ‘Mass 1’ and ‘Mass 2’, not because of any vagueness in the predicate ‘__is weightier than__’ or in the relation that the predicate stands for). However, even if it is true that Mass 1 is weightier than Mass 2, we cannot always know that it is true, given our limited powers of measurement.
In a similar way, ‘Reason 1 is weightier than Reason 2’ does not seem to me to be vague either. Our ignorance of its truth can be explained purely on the basis of our limited powers of discrimination. There is no reason to postulate any vagueness in this predicate ‘__is weightier than__’ or in the relation that it stands for.
For these reasons, then, I remain unpersuaded of the case that there is any such thing as moral vagueness.
This is the dodgy claim: “Now, there may be several different ways of using the word ‘permissible’. But to keep things simple, let us suppose that as it is used in this context, ‘A is permissible’ is equivalent to ‘There is at least as much reason for A as for any available alternative B’. Thus, for A to be permissible is for it not to be the case that there is an alternative B such that there is more reason for B than for A.”
Ralph says he’s investigating the possibility of *moral* vagueness. He says “Miriam Schoenfield gives several alleged examples of moral vagueness” but then goes on to discuss cases of ‘generic normative permissibility’.
Clearly two options, A and B, can both be morally permissible even though there is more (generic) reason to perform one of those actions than there is to perform the other. Let’s assume it’s morally permissible for me to go to either pub A or pub B. This is perfectly consistent with there being more (generic) reason for me to go to, say, pub A – unless of course we start taking *really* seriously the idea that we have a moral obligation to do what we have most (generic) reason to do. I think this would rule out supererogation. But perhaps that is as it should be?
Thanks, Kent!
As it happens, I completely agree with what you say here — at least if you are using ‘permissible’ in this context in such a way that it is a necessary truth that every available action that is not morally wrong is permissible.
When ‘permissible’ is understood in that way, let’s call it “moral permissibility”. The argument that I gave in my post concerns what we could call “generic permissibility” instead.
The analysis that I proposed in my post for “generic permissibility” won’t work for “moral permissibility”. So, I propose the following: ‘A is morally permissible’ is equivalent to ‘A is not morally worse than every generically permissible alternative B’.
This makes room for supererogation, because there could be two generically permissible acts, one of which is morally better than the other. However, since neither of these acts is morally worse than every generically permissible act (no act can be worse than itself!), neither of these acts is wrong — and so both acts are morally permissible too.
However, this will only complicate my argument a bit — it will not invalidate it. I suggest that for A to be morally worse than B is for there to be more moral reason against A than against B. So, I can just run the same argument again, while stipulating that all the reasons involved are moral.
I agree that the argument for vagueness isn’t a sorites if one doesn’t allow for a notion such as ‘absolutely impermissible’, which is surely problematic in any case (though it doesn’t follow from that that we might not consider a notion such as goodness in absolute terms, but I digress). However, there are other arguments for moral vagueness, and it strikes me as odd to say ‘Of course, we can never know exactly where on the series this cutoff point comes’ and then also to suggest that there is no reason why we couldn’t know this. (Are we just too stupid, or is there some barrier to moral progress ever achieving such a state of refinement that we could view such precise judgements as acceptable, on the grounds that they really do reflect the moral fact of the matter?)
I used to be very hostile to arguments for moral vagueness, but several years’ study, and more importantly about an hour duking it out with Parfit, have rather persuaded me that, whilst I HOPE we might be able to make precise moral judgements about all cases, the fact that we not only can’t do this for a great many cases but also find moral theories that pretend that we can (for instance, by offering multiple sharpenings) highly objectionable is at least some reason to think that there may not always be a precise moral fact behind every pairwise comparison of actions or outcomes.
Assume, as you allow, that our powers of discrimination are limited and consider a modification of Quinn’s case of the self-torturer (ST), in which I have to decide whether to move ST’s dial up to setting x in order to provide a needy youth with college tuition. Given my predicament, I wonder for which x’s moving the dial up to setting x is permissible, and whether there is an exact cut off point.
I know the following:
It is permissible to move ST’s dial up to setting 1 in order to provide for the youth’s tuition.
It is impermissible to move ST’s dial up to setting 1000 to provide for the youth’s tuition.
Suppose you say that, for some particular n, moving the dial to setting n-1 is definitely permissible, but moving the dial to setting n is definitely not permissible.
Problem: Since ST can’t, with any confidence, tell by the way he feels whether he has moved up from setting n-1 to setting n (given his limited powers of discrimination), and the youth will be benefited by the tuition funds, it seems like it must be false that moving the dial to setting n-1 is definitely permissible, but moving the dial to setting n is definitely not permissible.
Importantly, in this case, there is no harm to moving up the settings other than making ST feel different, which happens over multiple settings. It thus seems incorrect to say that there is an exact cutoff, but we just can’t know what it is; by hypothesis, there is no hidden relevant information.
Thanks, Simon!
To clarify, I am not saying that “there is absolutely no reason why we couldn’t know” where the sharp cutoff is. On the contrary, I think that there is a reason why we can’t know this.
Let w* be the weight of the reason not to amputate someone’s arm without their consent, and let w-n be the weight of the reason to save n lives. When m is the largest number such that w-m is less weighty than w*, then w-m and w* must be extremely similar weights — since w-(m+1) is not less weighty than w*, even though w-m is less weighty than w*.
So, comparing these two reasons and judging which is weightier would be like holding a stone in each hand — where in fact, the stone in your left hand weighs 1 kg, and the stone in your right hand weighs 1.0001 kg — and trying to judge which stone is heavier just by how it feels. In that case, given how close the weights of the two stones are, it would be beyond the powers of any normal perceiver to tell which is heavier (even though in fact one is heavier than the other).
In my view, the explanation of our ignorance in the cases that Miriam Schoenfield presents as examples of “moral vagueness” is exactly like that.
Thanks, Chrisoula!
As a methodological point, I think that the case of the self-torturer is quite puzzling, and so I don’t believe that we should place a lot of credence in our initial judgments about the case.
In particular, I don’t think that we should on reflection accept that you can just stipulate that in this case, “by hypothesis, there is no hidden relevant information”. In fact, I think that there has to be such hidden relevant information.
The relation of “being more painful than” between experiences is, it seems, asymmetric and transitive, while the relation of being “equally as painful as” is reflexive, symmetric, and transitive. The case is described so that for every pair of experiences that the self-torturer can get from the device, either one experience is more painful than the other or they are equally painful.
So, given that e3 is more painful than e1, we cannot say both that e3 is equally painful as e2 and that e2 is equally painful as e1. Instead, either e3 is more painful than e2, or e2 is more painful than e1, or both. But the self-torturer cannot tell the difference between e3 and e2, or between e2 and e1 (though he can tell the difference between e3 and e1). So, there is “hidden information” after all!
This hidden information is also “relevant”, as you put it, because other things being equal, the more painful experiences are also worse, and there is (at least slightly) more reason against inflicting the more painful experiences than against inflicting the less painful experiences.
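This point can be illustrated with a toy numerical model (the magnitudes and the discrimination threshold are invented for illustration only): if pains have underlying magnitudes and the self-torturer can only discriminate differences at or above some threshold, adjacent settings can feel the same while the endpoints feel different, which is only possible if at least one adjacent pair differs in fact.

```python
# Toy model (all magnitudes invented): pains have underlying
# magnitudes, but the self-torturer can only discriminate differences
# at or above a threshold.
JND = 1.0                        # hypothetical just-noticeable difference

def discriminable(a, b):
    return abs(a - b) >= JND

# Invented underlying magnitudes of three successive experiences:
e1, e2, e3 = 10.0, 10.6, 11.2

assert not discriminable(e1, e2)     # adjacent settings feel the same...
assert not discriminable(e2, e3)
assert discriminable(e1, e3)         # ...but the endpoints feel different
assert e3 > e2 or e2 > e1            # so some adjacent pair differs in fact
```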
For these reasons, I am pretty confident that this analysis is the right thing to say about the self-torturer case as well.
Apologies in advance for this long comment, Ralph – I’m unfortunately not the most concise person ever.
I think you’ve done a good job here of undermining the case for moral vagueness, but it doesn’t seem to me that you’ve shown conclusively that, given your assumption that the fundamental normative and ethical notions are essentially comparative, there cannot be any moral vagueness. Or if you have, I’ve missed it.
As far as I understand it, your argument is that if the fundamental normative and ethical notions are essentially comparative, then propositions about permissibility are equivalent to explicitly comparative propositions. These propositions cannot be vague, so neither can propositions about permissibility (and ditto for any other normative or ethical notion, since they can all be reduced to the fundamental, essentially comparative, notions). The tacit assumption here is that essentially comparative notions can’t be vague. To be clear, that seems to me probably true, but as far as I can tell, you haven’t demonstrated it here.
Assume for a moment that essentially comparative notions can be vague – and in particular, that it can be indeterminate whether one reason is weightier than another. Then consider Schoenfield’s case. There then may not be a first case C_n such that the act of amputation is determinately impermissible in C_n but determinately permissible in C_{n+1}. This is because the reason to avoid amputating an arm without consent could be weightier than the reason to save n lives, yet it could be indeterminate whether the reason to avoid amputation is weightier than the reason to save n + 1 lives.
I agree with you that it is not an intuitively vague matter whether one reason is weightier than another. And I agree that your arguments show that an appeal to vagueness is not needed to explain our ignorance of where the (purported) sharp cutoff lies in cases like this. But I just don’t see that it has been shown conclusively that this cannot be vague.
The analogy of weight relations between reasons with weight relations between masses is suggestive, but I don’t know how seriously we should take it. For one thing, we’re not assuming that the weightiness of reasons is complete, but the weightiness of masses is. So the analogy can lead us astray in some areas. It could, for example, be that mass A is much weightier than mass B, but only slightly weightier than mass C. From this, we can conclude that mass C is weightier than mass B. But the analogous claim for reasons may not hold, because reason B and reason C could be incomparable in weight (or on a par, or roughly equal, or fill in your favourite notion here). That’s compatible with reason A being much weightier than reason C, but only slightly weightier than reason B.
Can your opponent reasonably hold that there are vague essentially comparative moral notions, though? It’s at least not obvious to me that they can’t. Here are a few possibilities:
i) It could be that, given two very different sorts of pleasures, it can be vague whether one is a greater pleasure than the other. Compare the pleasure of listening to music with the pleasure of eating food. Imagine a sorites-like sequence, comparing the pleasure of eating a progressively more delicious cake (starting with a drab cake) to listening to a good-but-not-great piece of music. The drab cake is less pleasurable than the music, and a sufficiently delicious cake will be more pleasurable than the music, but it might plausibly be claimed that a tiny improvement to the cake cannot make the difference between “less pleasurable” and “more pleasurable”. (Of course, your arguments about there being no need to appeal to vagueness to explain our ignorance of any sharp cutoff, when we can appeal to our coarse-grained senses, apply perfectly here.)
ii) It could be that persons matter, persons are physical (and let’s also identify them with their brains), and it is a vague matter which molecules count as part of the brain. It might then be that different precisifications of the brain are in slightly different experiential states, and that the difference between these states can make a moral difference.
iii) Interpersonal comparisons of welfare might be vague – it might sometimes be indeterminate whether one person’s life is better than another person’s life (even though, when ramped up suitably, these comparisons can clearly be made).
I’m not advocating any of these (in fact I don’t believe in any of them), but it doesn’t seem to me obviously wrong that at least one of these could be vague.
All that said, your post does seem to me to undercut the case for moral vagueness very effectively. But then I never found the case for moral vagueness at all persuasive in the first place.
Side notes:
i) You characterise the (generic) permissibility of A as “There is at least as much reason for A as for any available alternative B”, and then as (equivalently) “there is no B such that there is more reason for B than for A”. These are not equivalent if weights of reasons can be incomparable – and I think you want the second rather than the first definition. Intuitively, if there are two options and the reasons for each option are incomparable in strength, both options should be (generically) permissible.
ii) About your answer to Kent: your characterisation of moral permissibility rules out what I think is the right verdict in cases of so-called suboptimal supererogation, as discussed recently by Joe Horton, Theron Pummer, Jeff McMahan, and others. In a case of suboptimal supererogation, there are at least three options: A, B and C. A seems to be permissible, C is supererogatory, and B is some option that is clearly, and morally, worse than C but seems to be better than A. For instance, A could be making no sacrifice, B could be making a large sacrifice to save one life, and C could be making the same large sacrifice to save two lives (including the one saved in B). Because B is morally better than A, and A is generically permissible, you are committed to saying that B is morally permissible. This strikes me as the wrong answer: I want to say that B is morally impermissible. Relatedly, you’re committed to moral permissibility violating contraction consistency (Sen’s alpha condition), because if we remove a generically permissible option that B is not morally worse than, this can result in B becoming morally impermissible.
So, if Kent is right that generic permissibility doesn’t make room for supererogation, you’ve ruled out what seems to me to be (part of) the correct account of supererogation. But I don’t see why your account doesn’t have room for supererogation. As you say, there could be two acts which are generically permissible, one of which is morally better than another. That seems to me like it’s enough already – I don’t quite see the need to bring in this separate notion of “moral permissibility” (although no doubt you have other reasons to do so) to deal with this. I’m more worried that acts like shifting a benefit from myself to someone else will turn out to be generically impermissible on your account, because I have prudential reason against it and no moral reason for it (this will be morally permissible by your definition, but I’m not happy with that notion – see previous paragraph).
Hi, Ralph. Thanks for this interesting post!
A question: is your view that if we assume that “the fundamental ethical or normative concepts are all essentially comparative notions”, then that *guarantees* that there is no moral vagueness? Or do you also want to assume that the fundamental normative concepts are not in some sense “multi-dimensional”?
Compare: if baldness depends only on the number of hairs on one’s head, then ‘is bald’ can be vague if it’s vague exactly how many hairs it takes not to be bald. But ‘is balder than’ won’t be vague, since it’s always determinate whether x has more hairs on their head than y (at least setting aside vagueness in ‘hair’ and ‘head’ and so on).
By contrast, if baldness depends both on number of hairs and on the distribution of hairs, then both ‘is bald’ and ‘is balder than’ might be vague. For it might be vague how number and distribution of hairs “compare to” or “are weighed against” each other—not sure what metaphor is best—when, say, x has more hairs than y but x’s hairs are distributed in a more baldness-inducing way (whatever that might mean).
Similarly, at first glance it seems that the assumption that the fundamental normative concepts are comparative is compatible with the view that they are also multi-dimensional in such a way that, e.g., ‘better than’ is vague due to vagueness in how those dimensions compare. So I was wondering if you assume that they are one-dimensional (which, while perhaps true, seems like a substantial assumption), or if you think that your argument works even if they are multi-dimensional in the way hinted at above?
Thanks again!
Thanks, Ralph. The question I would ask in response to your reply is whether you think there is anything that would stop us from, someday, finding a means of weighing these competing reasons precisely in our moral calculus. It seems to me that you believe that there is such a barrier, but your conviction that morality is not vague stems from the fact that you believe that this is somehow related to the ways in which humanity interacts with reasons, rather than to any fact about the reasons themselves. If we could invent a precise reason detector (perhaps invoking some sort of ethical AI, or an enhanced humanity with greater cognitive, emotive, sensory and analytical capacities than our own), then you want to say that we really could determine the precise number of lives that would justify amputating an arm (potentially even down to the moral loose change that would make the strengths of the two reasons exactly balanced, e.g. 65 lives, 2 saved legs, 43 toes and 3,420 hangnails). However, unfortunately, you do not think that such a detector is possible. Whilst human beings can create precise scales that will accurately tell us the difference between a 1kg weight and a 1,000.002g weight, our weighing of moral reasons must, of necessity, be done with our innate capacities and instincts – the analogue of trying to differentiate these two weights by holding one in each hand. I take it that this reflects your other beliefs about the nature of reasons.
Let me be clear, I do not find this kind of explanation uncompelling. However, from what I currently understand about the nature of reasons I have to say that I don’t find it any more compelling an explanation than that there is something about these reasons that even a superhuman reason weighing device may not be able to precisely determine. Reasons come in many different shapes and sizes, and it could be that there is simply no common scale of ‘weight’ that can subsume all these differences into a single measure.
As per Olle’s comment re multi-dimensionality, I feel we must be close to the welfare choice problems, e.g. which of 3 individuals do we save from drowning – youngest, oldest, most disabled, randomize? Maybe this is intractability rather than vagueness.
Hi Ralph,
This seems to beg the question. You write:
‘A is permissible’ is equivalent to ‘There is at least as much reason for A as for any available alternative B’. Thus, for A to be permissible is for it not to be the case that there is an alternative B such that there is more reason for B than for A.
Even granting the assumption that normative concepts are essentially comparative, the defender of normative vagueness should deny this inference. For it is still consistent with that assumption that a pair of dominant alternatives A and B can be in equipoise, such that none of the following are true:
(1) There is more reason for B than for A.
(2) There is more reason for A than for B.
(3) There is equal reason for A as for B.
Presumably, there is at least as much reason for A as for B iff (2) or (3) is true. You’re claiming that (2) or (3) is true just in case (1) is false. But in this, you’re already assuming that it is impossible for A and B to be in equipoise, which is of course exactly what the defender of normative vagueness should assert.
Furthermore, your argument that there has to be a sharp cutoff point in the amputation series (note the opportunities for puns) likewise seems question-begging. The A-option is permissible in the last case because there is at least as much reason for A as for B, not because there is not more reason for B than A. Again, however, there may be cases between the first and last case in which A and B are in equipoise. If so, there will be no sharp cutoff point.
Finally, consider your analogy between weights of masses and weights of reasons. In my view, the reason we are right to attribute our inability to know that one mass is weightier than another mass to our limited powers of discrimination is that we have a positive account of the facts that ground weights of masses (namely, physics) from which it follows that these weights are always determinate (at least outside of the quantum scale). However, it is not obvious to me that we have an analogous positive account of the facts that ground weights of reasons. In the absence of one, it strikes me as perfectly plausible that our inability to make precise discriminations in these cases reflects a genuine indeterminacy in the phenomena.
Hi Ralph – if you have a moment, could you clarify or explain a bit more why this is supposedly the case:
“The crucial assumption is that the fundamental ethical or normative concepts are all essentially comparative notions, like ‘__is better than__’ and ‘There is more reason for__than for__’, and the like.”
As a consideration: is goodness always comparative? For example, helping my sick grandmother on the weekends is good, and of course, it is also better–morally speaking–than playing paintball or drinking bloody marys all day, but I don’t think the goodness of helping grandma consists in its being better than paintball or drinking. It just seems good on its own. Thanks!
So in the original sorites argument the premise that removing one grain of sand isn’t enough to change a heap to a non-heap is backed by the idea that this would involve bizarre arbitrariness. Tacitly, most vagueness arguments are like this, in that they appeal in some way to the idea that brute arbitrariness is worse than vagueness. You can just appeal to the excluded middle to insist that there is some number n of grains at which the heap/non-heap shift occurs. But the result of such a general argument is that you have to be open to the possibility that it is just a brute arbitrary fact: i.e., not only are we ignorant of what n is, but there may simply be no explanation whatsoever for why the number is n rather than n-1 or n+1. It seems that your appeal to comparative notions does nothing to mitigate this. To escape brute arbitrariness you need to say that there are principled explanations for why our reasons have the specific weights that support some sharp cutoff rather than another. Benthamite hedonism might be one way of doing this: the balance of total quantity and duration of pleasurable vs. displeasurable experience may shift from saving 17 lives to 18 lives, even though it is very difficult for us to actually measure the relevant psychological states to discern that this is the case. But the problem there is that you should probably just immediately amputate to save one life, which is what makes Benthamite utilitarianism so controversial. But without something like Bentham’s measure of the good, it seems like you are just appealing to classical logic and accepting as a conclusion that there might be brute moral arbitrariness. But as far as I can tell, once you’re willing to accept brute moral arbitrariness all the work is done. There is no need to appeal to comparative notions. Maybe I’m missing something?
Gosh, a lot of comments for me to answer! I will go through these comments in order, starting with Tomi.
Tomi – I didn’t mean to claim that I have an incontrovertible proof that these comparative relations are non-vague. But I did mean to show that it is quite plausible to claim that classical logic applies to statements of the form ‘Reason R1 is weightier than reason R2’.
If classical logic does hold, then, for every case in this series, of the two relevant reasons R1 and R2, either the reason R1 is weightier than reason R2, or R1 is not weightier than R2. (I.e., the relevant instance of the Law of Excluded Middle holds.)
Note that this does not assume that the weights of reasons are all comparable. Even if R1 is not weightier than R2, it also need not be true that R2 is weightier than R1, or that R1 and R2 are equally weighty. It might be that none of these comparative relations hold. In this case, R1 and R2 are incomparable in weight.
However, even so, we still get a sharp cutoff point – the point between the last case where R1 is weightier than R2, and the first case where these two reasons are incomparable.
In short, the only assumption that I need to show that there is a sharp cutoff point is that in each of these cases, each of these relations either holds or doesn’t hold.
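The compatibility of incomparability with a sharp cutoff can be illustrated with a toy model (entirely invented): represent each weight as a pair of numbers, and count one weight as weightier than another only if it is at least as great in both coordinates and greater in at least one. This is merely a partial order, so some weights are incomparable, yet the series still has a last case where R1 is weightier than R2.

```python
# Toy model (invented): weights as pairs of numbers under the
# componentwise order, which is only a *partial* order: some pairs
# of weights are incomparable.
def weightier(u, v):
    """u is weightier than v: at least as great in both coordinates, greater in one."""
    return u[0] >= v[0] and u[1] >= v[1] and u != v

w1 = (50, 50)            # fixed weight of reason R1

def w2(n):
    # Weight of R2 in case n: grows in the first coordinate only, so
    # later cases become incomparable with w1 rather than weightier.
    return (n, 3)

cases = range(1, 101)

# Classical logic: in each case, w1 either is or is not weightier than
# w2(n).  Since it is in case 1 and is not in case 100, there is a
# last case where it is: a sharp cutoff, despite incomparability.
cutoff = max(n for n in cases if weightier(w1, w2(n)))

assert weightier(w1, w2(cutoff))
assert not weightier(w1, w2(cutoff + 1))
# The successor case is not weightier the other way round either:
# there, the two weights are simply incomparable.
assert not weightier(w2(cutoff + 1), w1)
```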
I would apply the very same picture to the other examples that you bring up, such as comparisons between the experiences of eating food and of listening to music. Incomparability doesn’t show that there is any real vagueness.
Similarly, with interpersonal comparisons, the point is that it might not be true either that X is better off than Y, nor worse off than Y, nor equally as well off as Y. (Indeed, it might also be that the degree to which state of affairs S1 is better for one person X than an alternative state of affairs S2 might be neither greater, nor less, nor equal to, the degree to which S2 is better for a second person Y than S1.) But that doesn’t show that there is no sharp cutoff point between the last case where X is better off than Y, and the first case where it is not the case that X is better off than Y!
Tomi, on your side notes:
i. Yes, you’re quite right: to make room for incomparability, the analysis of “A is permissible” should be formulated as “There is no B such that there is more reason for B than for A”, and not as “There is at least as much reason for A as for any available alternative B”.
ii. You’re quite right that in my reply to Kent, I reveal my disagreement with Joe Horton, Theron Pummer, and Johann Frick (among others) about suboptimal supererogation. I think it’s outrageous to say that such suboptimal supererogation is morally wrong. (How dare these philosophers go around blaming people who are making such great sacrifices to do good, just because these people mess up a little bit, and so fail to do the most good that they could? I actually find their attitude at least mildly offensive…!)
Incidentally, I don’t mind the fact that on my analysis, moral permissibility violates contraction consistency. But that’s a different story — it would take us too far afield to discuss this issue here…
Ollie —
If anything, I am positively assuming that many of our comparative evaluative concepts *are* multi-dimensional.
After all, the comparative evaluative concept ‘There is more reason for __ than for __’ obviously depends on the aggregation of the different reasons for and against the two options in question, and these different reasons can be thought of as “dimensions” of this comparative evaluative notion.
So, multi-dimensionality doesn’t do anything to stand in the way of the argument for sharp cutoff points, it seems to me!
Thanks, Simon!
My idea is that Timothy Williamson’s general idea (from “Knowledge and Its Limits”), about the limits of our powers of discriminating between very similar cases, applies to ethics just as much as to everything else.
The key point is that the last case where the non-maleficence reason – the reason not to amputate – is weightier than the beneficence reason – the reason to save lives – is extremely similar to the first case where the non-maleficence reason is not weightier than the beneficence reason. So it is just inevitable that it will be very hard for us to tell these cases apart.
By the way, even our most finely calibrated measuring tools have their limits too. Some differences in mass are too tiny for even these amazing devices to measure. So, while these limits to our knowledge can be pushed back, they will never completely disappear!
David –
You ask about a case where we have to choose which of 3 individuals to save from drowning – the youngest, the oldest, or the most disabled. Sometimes, I would say, this case isn’t even intractable!
Suppose the youngest individual is completely healthy, while the oldest person and the most disabled person are sure to die of a painful disease next week if we save them from drowning now. Then, obviously, you should save the youngest. Admittedly, we can construct variants of this case that are much harder, so that in these hard cases we can’t know for certain what the right thing to do is. But again, that would be unavoidable ignorance, not vagueness.
Ben –
Most of your comment is answered by my first reply to Tomi (see above). So, my argument doesn’t beg the question in the way that you describe (although at one point I should have been a bit more careful in my formulations…).
I disagree with your claim that our confidence that the relation between masses of “being heavier than” is non-vague rests on our knowledge of physics. Even before modern physics, it seems to me, ancient mathematicians like Euclid or Archimedes would have been able to see that this relation is non-vague (at least so long as the two masses being compared have precise boundaries).
In principle, the complete ultimate truth about morality and ethics could provide as much of an explanation as can be given of the precise locations of these sharp cutoff points. But I don’t think that we need to wait for the complete ultimate truth about ethics to see that such sharp cutoff points must exist.
Michael –
The “crucial assumption” that my argument rests on is essentially the one that John Broome has defended, and called the idea that “goodness is reducible to betterness”.
If this assumption is correct, there is no such thing as absolute goodness. To say that something (like your visiting your grandmother) is “good” is just to say that it is at least as good as the contextually salient standard. On this view, ‘good’ is like ‘tall’: to say that someone is “tall” is just to say that they are at least as tall as the relevant standard of comparison – there is no absolute tallness.
This view is undoubtedly controversial. Some people firmly believe in absolute categorical goodness. But I wasn’t defending this view here: I was just explaining one of this view’s consequences – if this view is correct, there is no moral vagueness.
Graduate Student –
What sort of explanation of these “weights” could there be? Well, in the case that I was focusing on, these weights express the relative importance of non-maleficence (not harming people) and beneficence (doing good, e.g. by saving lives).
There are some extreme views on this issue: E.g., utilitarians think that non-maleficence has no special importance except in so far as it is a form of beneficence; and those who think that we have an absolute indefeasible right not to be harmed would hold that reasons of non-maleficence have absolute priority over reasons of beneficence. But Miriam Schoenfield and I were both assuming that these two extreme views are wrong. So it seems that both kinds of reasons have non-trivial weights of some kind.
Within the comparativist framework, there is going to be a certain mathematically describable pattern that captures the relative importance of these two kinds of reasons. The pattern will presumably match our intuitions in the easy cases. But the pattern will also extend into some of the less easy cases too, and (as I have argued, in replying to Simon above) we will never know for sure exactly what this pattern implies for these hard cases.
Ultimately, of course, explanations must come to an end, even in ethics. If this is “arbitrariness”, then we just have to live with it. In general, I confess that I don’t quite know what you mean by “arbitrariness” here. (We presumably can’t give any deeper explanation of why suffering pain for a longer time is worse than suffering an equally intense pain for a shorter time: so is this arbitrary?) But I don’t see why arbitrariness itself guarantees that there is no vagueness. (If there is vagueness, why wouldn’t that fact itself be arbitrary?)
What I was trying to argue here is that the comparativist framework makes it more plausible that classical logic applies to these cases, and that shows that there must be sharp cutoff points, and so no vagueness.
Hi Ralph,
Thank you for the really interesting post. I’ve been puzzling over some similar things recently, especially concerning how ‘spectrum arguments’ (e.g. those explored at length in Temkin’s *Rethinking the Good*) relate to sorites arguments.
Re your response to Graduate Student, I agree that, if we rule out those ‘extreme views’ that disallow tradeoffs, and we accept classical logic, then we must accept a sharp cutoff. One might accept a sharp cutoff, but nonetheless take oneself to have incurred some cost in doing so (since the sharp cutoff would, as Graduate Student suggested, be inexplicable). Or do you think that accepting sharp cutoffs here is a cost-free solution? I guess I’d be inclined to think of it as a cost, even if in some cases a cost worth paying.
A more general question: are you able to say more about why you think it is more plausible that classical logic applies in the comparative case (e.g. ‘_is better than_’, ‘_is balder than_’) than in the noncomparative case (e.g. ‘_is good’, ‘_is bald’)? Is it that classical logic more plausibly applies in all comparative cases, or only a subset of them?
Olle noted that there might be two dimensions relevant to ‘is balder than’ – number and distribution of hairs. Suppose that while we think these dimensions trade off against each other within a range of cases, we also accept a kind of ‘extreme view’, according to which, once a head’s distribution reaches a certain point, then, no matter how many hairs it has, it is balder than any head with a distribution that has not yet reached this point, provided it has enough hairs. Maybe this is mistaken, but I can imagine many people wanting to say that the plausibility of a sharp cutoff in this example featuring ‘is balder than’ is about the same as the plausibility of a sharp cutoff in the standard sorites featuring ‘is bald’. So I’d be curious to know your thoughts here. Apologies if this is confused, but here’s some further speculation:
One reply is that, in this example, the kind of sharp cutoff is of a different character to that in your original example. Your original example was about the precise tradeoffs between dimensions that do trade off against each other, not about the precise location where tradeoffs can no longer be made (because one dimension reaches some critical point). But I think I’d want to hear a bit more about how these different sorts of sharp cutoffs are of differential plausibility (if that’s what you think).
One thing you could do is reject sharp cutoffs between cases where dimensions trade off and cases where they don’t. But if you reject such sharp cutoffs, and reject the claim that there is a vague transition between these cases, it looks like you’ll be forced into a kind of ‘all or nothing’ view about tradeoffs, according to which, if two dimensions trade off against each other in any cases, then they trade off against each other in all cases. It looks like this is going to force us to say many controversial things in ethics, like that there’s some number of headaches, each had by a separate person, that are together worse than one person’s death. I suspect that this isn’t the way you want to go – though correct me if that’s wrong! If it’s not the way you want to go, then again I think I’d want to hear more about how a sharp cutoff point between cases where tradeoffs relevant to ‘is balder than’ (or ‘is better than’) can be made and cases where they can’t is more plausible than the standard sort of sharp cutoff point relevant to ‘is bald’ (or ‘is good’).
Thanks, Theron! Your comment raises a lot of interesting issues.
1. Is implying the existence of a sharp cutoff point a “cost” of a moral theory? I don’t really think that it is. The only way to minimize the number of unexplained fundamental principles in an ethical theory is to go with a very simple theory like classical utilitarianism. But such simple theories are too simple to provide an adequate explanation of the intuitive data. All the simplest theories that do provide adequate explanations of the data will involve such sharp cutoff points. So although it may appear initially as a “cost”, further reflection reveals that it is not.
2. Why do I think that the view that the fundamental ethical notions are essentially comparative makes it more plausible that classical logic applies? I should have said more about this. The key point is that we are perfectly familiar with the fact that one item can be F-er (e.g. longer or heavier or faster…) than another, but only so very slightly F-er than the other item that we can’t tell that it is F-er. So, it seems to me, there is much less intuitive difficulty in recognizing that, even if we can’t detect which is the case, either one reason R1 is weightier than another reason R2, or R1 is not weightier than R2, than there is in recognizing that either act A is permissible or it is not permissible.
3. Your comment ends with some questions about how to make sense of tradeoffs between different “dimensions”. In the example that I’m looking at, these “dimensions” are fundamentally just different kinds of *reasons*, which can be aggregated to yield an all-things-considered conclusion about *how much* reason there is for each of the available options.
In my analysis of the example, I assumed that the fact that the B-option involves saving fewer lives than the alternative is always a reason against the B-option, and the fact that the A-option involves amputating someone’s arm without their consent (while the alternatives do not harm that person at all) is always a reason against the A-option. In my analysis, these two reasons are always weighed against each other to produce an all-things-considered conclusion about *how much* reason there is for each option.
In theory, however, we could allow for principles that explain why certain factors (like saving fewer lives, etc.) give rise to reasons in some cases but not in others. But if we do allow for such principles, then it seems that there has to be a sharp cutoff point between the last case where the factor does provide a reason and the first case where it doesn’t. This kind of sharp cutoff strikes me as much more problematic – since it involves a radical discontinuity between cases that are otherwise extremely similar, and so can’t be interpreted as analogous to the familiar cutoff points (like those that I was exploring in my original post) which just arise from factors that vary continuously from one case to another. So cutoff points of this sort really are a “cost” (or so I think).
For this reason, I am inclined to think that it is better just to accept that there is a number of headaches that is worse than one death – and to explain away any temptation to think otherwise as an illusion.
However, I don’t believe that I’m committed to this position just by what I said in my original post. The arguments of the original post are consistent with a number of different ways of thinking of these tradeoffs between different reasons.
Many thanks for this helpful reply, Ralph! I would agree that sharp cutoffs are more plausible when the things they sharply cut between are fairly similar (as you say, the difference between ‘reason R1 is weightier than R2’ and ‘R1 is not weightier than R2’ can be slight). The case for avoiding *these* sorts of sharp cutoffs by positing moral vagueness is, perhaps, a weak one. But what would you say to someone who accepts your comparativist picture but still wants to both (i) avoid the more radical sorts of sharp cutoffs you mentioned in the third part of your reply to me, and (ii) avoid trading lives for headaches? If it turned out that we could not adequately explain away the intuition that one death is worse than any number of headaches, would we then have a good case for moral vagueness?
Well, it depends what you mean by “avoiding trading lives for headaches”.
As it happens, I think that comparisons between outcomes that involve different levels of harm for the *same* people generate reasons in a much more straightforward way than comparisons between outcomes in which different numbers of people exist. So, the cases in which one outcome involving a person’s death is better than a second outcome that involves a certain number of headaches would have to be cases in which everyone who has a headache in the second outcome also exists (without the headache) in the first outcome.
There might also be non-contingent limits on the number of people whom any agent could ever significantly affect. It might be that this number is smaller than the number of headaches that would be worse than one death. So, it might be a non-contingent matter that no real agent will ever be justified in choosing to save any number of people from headaches rather than one person from death.
However (you might reply), these manoeuvres are compatible with its still being true (albeit in a practically irrelevant way…!) that a world in which a sufficiently huge number of people suffer headaches, and an additional person does not die prematurely, is impersonally worse than a second world in which the additional person does die prematurely, and that huge number of people do not suffer headaches.
One response would be to follow Judith Thomson and raise doubts about the intelligibility of this practically irrelevant notion of the “impersonal goodness” of a whole world.
In fact, I am inclined to think that this Thomson-inspired response would be overkill. However, I do doubt the reliability of alleged intuitions about such practically irrelevant questions about which of two worlds with jaw-droppingly vast populations is “impersonally better”. So, suppose that someone claims to have the “intuition” that a world in which 10^(10^10) people suffer headaches, and an additional person does not die prematurely, is not worse than a second world in which those 10^(10^10) people do not suffer headaches, but the additional person does die prematurely. I suggest that we should greet this claim with a big dose of scepticism!
Hi Ralph,
Thanks for all the replies, both to me and others, and especially for your last reply to Theron, part 2. I’ve got a much clearer picture of your argument now. Still, I want to take issue with just a couple of things.
1) Classical logic, by itself, does not immediately guarantee bivalence. If classical logic applies to comparisons of strength of reasons, we do get that either R1 is weightier than R2, or R1 is not weightier than R2, for any two reasons R1 and R2. But we get that as a single sentence – it’s formally distinct from the claim that one of the disjuncts in fact holds. Supervaluationist accounts of vagueness validate classical logic, and will say for example that all sentences of the form “H is a heap or it is not a heap” are true, but there will still be some candidates H for “heap” such that neither of the disjuncts “H is a heap” and “H is not a heap” are (super)true.
Now, I happen to think that any view on which a disjunctive statement can be (super)true even though neither of the disjuncts is (super)true should be rejected, precisely because “at least one of the disjuncts is true” is just what it means for the disjunction to be true. But the supervaluationist approach is very much extant, so I don’t think it can just be ignored.
Is something like supervaluationism plausible in the moral case? Well, I don’t think so, but here’s one picture. When there are very different sorts of reasons at play, we might think that there just is no fact of the matter about how exactly they weigh against each other. There might be multiple weightings of certain sorts of reasons against other sorts such that the weighting must be in this range, but it is indeterminate which one it is. Precise weightings within the range will be analogous to “sharpenings”, and a sentence of the form “R1 is weightier than R2” will turn out to be (super)true just in case it is true on all admissible weightings of the relevant reasons. When different admissible weightings disagree about whether R1 is weightier than R2, the sentence will be indeterminate.
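Here is a minimal sketch of this picture (the admissible range of “exchange rates” between the two sorts of reasons below is invented purely for illustration):

```python
# A toy supervaluationist evaluation: 'R1 is weightier than R2' is supertrue
# iff true on every admissible precise weighting, superfalse iff false on
# all of them, and indeterminate otherwise.

# Each admissible "sharpening" fixes a precise exchange rate between the two
# sorts of reasons; this range is purely illustrative.
admissible_rates = [0.8, 1.0, 1.2]

def weightier(r1: float, r2: float, rate: float) -> bool:
    """Whether R1 outweighs R2 on one precise weighting."""
    return r1 > rate * r2

def supervaluate(r1: float, r2: float) -> str:
    verdicts = {weightier(r1, r2, rate) for rate in admissible_rates}
    if verdicts == {True}:
        return "supertrue"
    if verdicts == {False}:
        return "superfalse"
    return "indeterminate"

print(supervaluate(2.0, 1.0))  # same verdict on every admissible rate
print(supervaluate(1.0, 1.0))  # verdict varies across the sharpenings
```

Note that in the middle sort of case, “R1 is weightier than R2, or it is not” still comes out true on every sharpening, even though neither disjunct is supertrue — which is exactly the divergence between excluded middle and bivalence at issue.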
I don’t think there’s particularly good reason to hold such a view. But then I don’t really see the problem with sharp cutoffs in the first place, even if notions like permissibility are not reducible to essentially comparative notions.
(Also, more would have to be said to even avoid this on the supervaluationist-like picture – if it isn’t vague which weightings count as admissible, there will still be sharp cutoffs, just going from Det(R_1 > R_2) to not Det(R_1 > R_2) rather than from (R_1 > R_2) to not (R_1 > R_2).)
I should also point out that while it is similar to supervaluationism, this view is not (or may not be) a variety of it. For the supervaluationist, there are multiple admissible sharpenings because our language fails to pin down a single precise property. On this view, there are multiple admissible weightings because the differences in “sorts” between reasons yield indeterminacy in how they weigh against one another (this is not satisfying, but probably a more competent philosopher could provide a more convincing explanation of how indeterminacy could arise here).
If one is convinced that moral vagueness is a problem, one could hold the above view. Still, I agree with you that there is not so much intuitive difficulty in recognising that there could be sharp cutoffs for sentences about the comparative strengths of reasons, since when the differences in strength are very small, it makes perfect sense that it can be very hard for us to tell which reason is stronger. But I don’t think the lesson here is that “classical logic applies to essentially comparative notions”, firstly because (although this is somewhat controversial) classical logic applies everywhere, and secondly because classical logic doesn’t guarantee non-vagueness. Rather, I think the lesson is that bivalence holds for essentially comparative notions.
2) In bringing up my three examples, I didn’t mean to suggest that it is clearly true that indeterminacy arises in them, especially since I don’t think indeterminacy arises! As you say, at least two of them can be explained by positing incomparability rather than indeterminacy. My point wasn’t that this is the wrong move to make, or even that it is a less plausible move to make (I think it’s more plausible than positing indeterminacy) – I only meant to suggest that claiming indeterminacy in these examples is *a* move that one could make, and it’s not obviously the wrong one. And in the second case – “persons are brains” combined with vagueness as to what counts as part of the brain, and hence part of the person, and the moral significance of such vague matters – I don’t think we can make the usual appeal to incomparability. I’m not worried because I don’t believe in indeterminacy anywhere, and I also don’t think persons matter. But I think someone without either of those get-out-of-jail-free cards maybe should worry.
Thanks again, Tomi!
Well, I am convinced that classical logic leads ineluctably, via the relevant instances of the T-schema, to bivalence.
The T-schema, ‘T(P) ↔ P’, implies ‘¬T(P) ↔ ¬P’.
Another instance of the T-schema is: ‘T(¬P) ↔ ¬P’.
The definition of falsity is: ‘F(P) ↔ T(¬P)’.
Putting these three biconditionals together, we get ‘¬T(P) ↔ F(P)’ — which looks to me like a statement of bivalence…
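For what it’s worth, this little derivation can be checked mechanically. Here is a sketch in Lean 4, treating ‘T’ as an uninterpreted truth predicate, taking the T-schema as a hypothesis, and unfolding the definition of falsity F(P) as T(¬P):

```lean
-- From the T-schema alone, ¬T(P) is equivalent to T(¬P), i.e. to F(P).
theorem bivalence (T : Prop → Prop) (schema : ∀ P, T P ↔ P) (P : Prop) :
    ¬ T P ↔ T (¬ P) := by
  constructor
  · intro h                      -- from ¬T(P), derive T(¬P)
    exact (schema (¬ P)).mpr (fun hp => h ((schema P).mpr hp))
  · intro h hTP                  -- T(¬P) and T(P) together are contradictory
    exact (schema (¬ P)).mp h ((schema P).mp hTP)
```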
I am sorry for not responding to your second example, about the idea that “persons are brains” and the apparent vagueness of references to brains. There are quite a few ways for the opponent of moral vagueness to go on this issue, I think.
My preference would be to say that identity does matter, in a sense. However, identity does not matter intrinsically, but only because it is non-contingently correlated with something else that matters, which (unlike identity) comes in degrees. (So, dying in the normal way is much worse than ceasing to exist because one undergoes fission.) At all events, this is the general direction in which I would like to go in responding to your second example.
Hi Ralph,
Thanks again for your reply.
For my part, I would state bivalence in a metalanguage as:
For all propositions P, either “P” holds, or “not P” holds.
This can be false even if, for all propositions P expressible by phi, the sentence “phi or not phi” expresses a true proposition. And it can also be false even if the sentence “it’s not true that phi if and only if it’s false that phi” (every instance of your bivalence schema) expresses a true proposition. The problem with having excluded middle but not bivalence, as I see it, is that it requires the “or” of the metalanguage to come apart from the “or” of the object language in an implausible way – we really do have to treat the meta “or” differently to the object “or”, but it seems like “or” should have the same meaning in both cases. (The meta “not” will also have to come apart from the object “not”, and arguably this is the source of the divergence).
Still, while we both agree that excluded middle *does* lead to bivalence, the opposite view is very much out there. For instance, in the philosophy of mathematics, plenty of people believe that the Continuum Hypothesis (or some other sentence of set theory) is indeterminate, even though “CH or not CH”, and every other instance of excluded middle, is determinately true. So while I totally agree with you about bivalence, I think you’ve got good reason to construe your argument as aiming directly for bivalence, rather than for the applicability of classical logic. By doing this, you can skip over all of the philosophical logic stuff. It will also make your argument persuasive to someone who believes classical logic holds universally, but nevertheless believes that there is some (non-epistemicist) vagueness – even if (as I think) they are wrong to hold both of those positions at the same time.