Suppose that a subject, S, is in some less-than-ideal epistemic position with respect to both the relevant normative facts and the relevant non-normative facts – that is, assume that S faces both normative uncertainty (i.e., uncertainty about the relevant normative facts) and non-normative uncertainty (i.e., uncertainty about the relevant non-normative facts). Yet S must still choose which of the mutually exclusive and jointly exhaustive act alternatives available to her to perform. Call this S’s choice situation.
Theories about what S ought to do in her choice situation fall into one of the following three categories:
(1) Purely Objective Theories (PO theories): These hold that the permissibility of S's doing x is affected neither by S's uncertainty about the non-normative facts nor by S's uncertainty about the normative facts.
- A PO theory tells us what no conscientious person would do if she faced S’s choice of alternatives but had certain knowledge about all the relevant normative and non-normative facts, for no conscientious person would do what it is PO-theory-impermissible for S to do if she faced S’s choice of alternatives and had certain knowledge about all the relevant normative and non-normative facts.
- Example of a PO theory: Objective utilitarianism – S’s doing x is permissible if and only if S’s doing x will maximize utility.
(2) Hybrid Theories (H theories): These hold that the permissibility of S's doing x is affected either by S's uncertainty about the non-normative facts or by S's uncertainty about the normative facts, but not by both. There are two types of H theories: H1 theories and H2 theories. On H1 theories, the permissibility of S's doing x is affected by S's uncertainty about the non-normative facts, but not by S's uncertainty about the normative facts. On H2 theories, the permissibility of S's doing x is affected by S's uncertainty about the normative facts, but not by S's uncertainty about the non-normative facts. I’ll focus on H1 theories, since I know of no one who endorses any H2 theory.
- An H1 theory tells us what no conscientious person would do if she faced S’s choice of alternatives, shared S’s uncertainty about the relevant non-normative facts, but had certain knowledge about all of the relevant normative facts, for no conscientious person would do what it is H1-theory-impermissible for S to do if she faced S’s choice of alternatives, shared S’s uncertainty about the relevant non-normative facts, but had certain knowledge about all of the relevant normative facts.
- Example of an H1 theory: Subjective utilitarianism – S’s doing x is permissible if and only if S’s doing x would maximize expected utility. (This example and the objective-utilitarian example above are rendered schematically in the sketch just after this list.)
(3) Purely Subjective Theories (PS theories): These hold that the permissibility of S's doing x is affected both by S's uncertainty about the non-normative facts and by S's uncertainty about the normative facts.
- A PS theory tells us what no conscientious person would do if she were in the exact same situation that S is in, for no conscientious person would do what it is PS-theory-impermissible for S to do if she were in S’s situation.
- Examples of PS theories: Theories such as those developed by Ted Lockhart in his Moral Uncertainty and Its Consequences, by Andrew Sepielli in his “What to Do When You Don’t Know What to Do,” and by Michael J. Zimmerman in his Living with Uncertainty.
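To put the two utilitarian examples above in schematic form (this is just one simple way of rendering them, with notation of my own choosing rather than anyone's official formulation): let $s^*$ be the actual state of the world, let $P$ be S's evidential probability function over possible states, and let $u(x, s)$ be the utility of S's doing x in state s. Then:

Objective utilitarianism (PO): S's doing x is permissible iff $u(x, s^*) \geq u(y, s^*)$ for every alternative y.

Subjective utilitarianism (H1): S's doing x is permissible iff $\sum_s P(s)\,u(x, s) \geq \sum_s P(s)\,u(y, s)$ for every alternative y.

A PS theory would go a step further and also let S's credences over rival accounts of value itself (i.e., her normative uncertainty) figure in the criterion, rather than fixing u from outside S's epistemic position.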
I can see that it’s useful to develop a PS theory for at least two reasons. First, a PS theory would be useful to agents in deliberating about what to do. By contrast, neither a PO theory nor an H theory would be useful to agents in this way, because agents have to deliberate given their actual epistemic positions, which often involve both normative and non-normative uncertainty. Second, a PS theory would be useful in determining when normative criticism is appropriate. Those who do what is PS-theory-impermissible are necessarily open to normative criticism. If no conscientious person would do what is PS-theory-impermissible in S’s situation, then anyone who does what is PS-theory-impermissible in S’s situation is open to normative criticism. By contrast, neither those who do what is PO-theory-impermissible nor those who do what is H-theory-impermissible are necessarily open to normative criticism. It seems to me, then, that a PS theory gives us a theory about what S subjectively ought to do.
I can also see why it’s important to theorize about what the correct PO theory is. Only by doing so can we hope to resolve some of our normative uncertainty, and that seems like something we should do qua philosophers. I believe that a PO theory gives us a theory about what S objectively ought to do.
But what’s useful or interesting about H theories? I can’t see that there’s anything. Yet such theories are quite popular – or, at least, those of the H1 variety are. Many philosophers (consequentialist and non-consequentialist alike) accept theories according to which the permissibility of an act is affected by the agent’s non-normative uncertainty, but not by the agent’s normative uncertainty. For instance, many philosophers accept theories (such as subjective utilitarianism) according to which it is impermissible to perform an act that involves a subjective risk of harming others even if it involves no objective risk of harming anyone. And yet these same theories do not allow that the normative uncertainty of the agent can affect the permissibility of her actions, for these theories take definite stances on all sorts of normative questions about which agents might be uncertain: questions such as whether or not it’s permissible to cause (or to take an objective risk of causing) harm so as to prevent more numerous others from causing (or taking an objective risk of causing) similar harms.
Now, many of the philosophers who endorse H1 theories reject PO theories, because PO theories fail to be action-guiding. But these theorists seem to take only a half-step. They rightly point out that, if a normative theory is going to be action-guiding, it will need to take account of the non-normative uncertainty that deliberators often face. Yet they take only a half-step, because they neglect the fact that, if a normative theory is going to be action-guiding, it will also need to take account of the normative uncertainty that deliberators often face. So if they’re concerned about action-guidingness, they should favor PS theories, not H theories, over PO theories. Thus, I don’t understand the motivation for H theories. If you think that a moral theory must take account of an agent’s uncertainty in order to be action-guiding, then why take a half-step and think that only non-normative uncertainty (or that only normative uncertainty) is relevant? I don’t get it. Why do some philosophers find H theories attractive? Does anyone find H2 theories attractive? I don’t know of anyone who endorses an H2 theory. Yet what possible reason could there be for preferring H1 theories to H2 theories?
As a promoter of both PS and H1 theories, I would argue that the attractiveness of H1 is that there may be reasons for thinking that most of the normative uncertainty can be far more easily resolved than the non-normative uncertainty, and that this is especially true if resolving the moral uncertainty leads us to consequentialism. For under this theory, an infinite number of future non-normative facts may be relevant to our choice; we can only hope for some broad, approximate resolution of these, and must often concede that the evidence for them will vary amongst conscientious agents.
However, there might be reasons to think that very few conscientious agents could be absolved of ignorance of the relevant normative facts, namely, those facts that should lead them to adopt consequentialism, or at least to have reasons to give consequences great moral weight. If so, it is worth developing an H1 theory for use by most conscientious agents (or the most conscientious agents!), while conceding that it is at least theoretically possible for some conscientious agents, in unusual circumstances, to be morally permitted to follow a non-consequentialist theory.
Hence I agree that PS theory is ultimately the best for action-guiding; and as a good pragmatist, I would say this also means that the true moral theory is ultimately of this form. But such a true PS theory could rather quickly give moral agents reasons to adopt an H1 theory, only recognizing the possibility of a PO theory as a kind of abstract ideal. It is our obligation to in some way approximate our behavior to what the PO theory would say, but keeping our feet firmly grounded in PS, I would deny that we are actually obligated to do what a PO theory says we should, when the relevant evidence is inaccessible to us (as it always will be, for consequentialism at least).
I was trying to think of a good analogy to this; this is a little rough but try it. One might debate the problem of what kind of bridge to build over a certain river. One might say there is structural uncertainty (what kind of bridge shape would be best to use) and load uncertainty (what vehicle weight it is likely to bear). To some extent, the structure you build limits the load: if you have a two-lane bridge, you can be pretty certain the maximum load it will have to bear is less than if you have four lanes. If you have a toll bridge, you can be pretty sure that the total number of vehicles on the bridge will be limited by the rate of passing the toll gate, limiting the load further. And so on.
Now in general we can have a double-uncertainty theory about the bridge: we ought to build the bridge that is best given the geographical constraints and carries the load safely, and if we don’t know one or both of these, we are correspondingly uncertain what bridge we should build, and would not be wrong, perhaps, in being safe/conservative in our plans (or building no bridge at all). Call this SE, for subjective engineering. Ultimately that’s the best, most rock-solid truth we can assert about how we should build the bridge. But that may not get us very far. So we do a little work and think: well, look, it’s not a very wide river, and there are high banks on either side, so a two-point cantilever is obviously the way to go. We could be wrong, but our evidence for this might seem pretty compelling. So we now have a new theory: we should build a cantilever bridge strong enough to handle the likely load. Call this HE1 (hybrid engineering type 1). Now we have a further question: what is the maximum load we need to support? This is a whole new question, and different answers to it will give you different-sized bridges. We could play it ultra-safe and build a massive bridge, but that may be wasteful; or we could cut corners and end up with one that wears out dangerously soon. We gather evidence and build accordingly; but we might remain much more uncertain about these numbers than about the cantilever design. But we base such actions on HE1, even though this is strictly subordinate to and derivative from the more fundamental SE.
We could go on and talk about an objective engineering principle, OE: build the bridge that will handle all actual future traffic safely at optimal cost. But this is pretty useless to us, since we can’t know the exact values of the relevant variables; at best we approximate this with the evidence we have. But the evidence about the bridge design may be very clear, the evidence about the load more uncertain. So for practical purposes developing HE1 is a good idea.
Maybe it’s important not just that (as Scott says) the normative uncertainty is resolved more easily, which I guess is controversial, but that resolving that uncertainty is the task of normative ethics. In that sense we can think of H1 theories as constituting the best advice which a normative ethicist can give to an agent. In that sense it seems very natural that H1 theories are what normative ethics typically produces.
That may not sound like an explanation of why permissibility goes along with H1 theories. But notice that the difference between PS and H1 theories isn’t really analogous to the difference between H1 and PO theories. H1 theories give more specific advice to the agent than PS theories; but PO theories do not give more specific advice to the agent. I can imagine a theory that would give more advice than a PS theory: namely, one which, e.g., actually told us what would maximise utility by including a lot of non-moral information. But philosophers rightly doubt their ability to give such fully informative theories. The standard of permissibility is thus following the advice of the most informative normative theory available (letting the degree of specificity in the normative theory it makes sense to give determine the question of how objective the theory of permissibility is).
I worry also that the application of the “why take only a half-step?” thought will lead to the empty theory. It isn’t clear that PS theories represent a stable stopping-point, in that whatever the content of such theories we can always ask: “why assume that the agent is certain of *that*?”
Thanks for the comments. I respond to Scott and Daniel below.
Scott: You write: “I agree that PS theory is ultimately the best for action-guiding; and as a good pragmatist, I would say this also means that the true moral theory is ultimately of this form. But such a true PS theory could rather quickly give moral agents reasons to adopt an H1 theory.”
On an H1 theory, the permissibility of S’s doing x is not affected by S’s uncertainty about the normative facts (and let’s assume that this uncertainty is due to S’s less than ideal epistemic position and not due to S’s failure to uncover and consider the available evidence). Now I don’t see how S’s resolving some of her normative uncertainty would ever get us to think that the permissibility of S’s doing x is not affected by whatever normative uncertainty remains in S. So how does the truth of a PS theory lead us to adopt an H1 theory? Aren’t PS theories and H1 theories logical contraries? And if so, how can you claim that some PS theory is true while also claiming that we should adopt some H1 theory? Are you claiming that we should adopt false theories?
Daniel: You talk about advice. I’m talking about permissibility. What do you take the relationship between the two to be? As I see it, the two are related as follows. It is subjectively permissible for me to advise S to refrain from doing x iff my doing so is PS-theory permissible. And it is objectively permissible for me to advise S to refrain from doing x iff my doing so is PO-theory permissible.
I have a lot of sympathy with Doug’s point. However I think there is an explanation, in two parts, of why people find H1 theories attractive.
First, the consequentialist tradition in particular, but other moral theories too, view moral conclusions as being derived from very high-level abstract principles. Consequently, a lot of what an ordinary person would call normative uncertainty is really, on this kind of view, non-normative uncertainty. For example, if you are uncertain whether lying is permissible in the sort of tight spot you’re in, the consequentialist is likely to diagnose this as uncertainty about the non-normative facts of what will maximize utility.
Second, if you do have genuine normative uncertainty, then arguably there is just no theory to be had. Here’s an analogy: at the lottery ticket counter, the OP tells me to buy the ticket with the winning number. I’m (non-normatively) uncertain what that number is. Thus I buy nothing, which is a guaranteed second-best outcome, but that’s probably as good as I’m going to do. Now consider someone confronted with, so to speak, a consequentialist lottery: they are (normatively) uncertain whether the good to be maximized is pleasure, or freedom, or friendship, or aesthetic experiences, or some hybrid of these. It’s very unclear to me what would count as a “second-best” strategy here.
So maybe the assumption has been that (a) there isn’t any “real” normative uncertainty and (b) insofar as there is, in practical terms, there is nothing to be done about it. But having said that, I haven’t read the PS literature Doug cited.
Doug, what do you think of a view like the following?
There are two sorts of ‘ought’, objective and subjective. Context settles which sort a given ‘ought’ is. The correct theory of objective ought is a PO theory. The correct theory of subjective ought is a PS theory. These theories aren’t in competition, because they’re about different things.
Your reply to Daniel suggests you might endorse something like this, though I don’t think this came through in your original post.
Campbell,
I’m attracted to such a view.
Doug,
It seems that if we assume that permissibility and blame are closely connected, there are two reasons, crudely sketched below, for adopting an H1 theory. Perhaps this helps explain the appeal?
(Appeal to intuition) Intuitions about the fairness of blame support blaming those ignorant of evaluative or moral facts, but tell against blaming those who are ignorant of non-evaluative facts.
(Utilitarian Argument) The practice of blaming those ignorant of evaluative or moral facts leads to more good than not blaming such agents. But a practice of blaming those who are ignorant of non-evaluative facts would lead to more bad than a practice of not blaming such agents.
Brad,
My intuition is that those who perform objectively impermissible acts because they are culpably ignorant of the relevant non-normative facts are just as blameworthy as those who perform objectively impermissible acts because they are culpably ignorant of the relevant normative facts. Conscientious agents will attend to the relevant non-normative facts (i.e., the non-normative facts that are normatively relevant), and non-conscientious agents who fail to do this are blameworthy for this failure. And my intuition is that those who perform objectively impermissible acts because they are non-culpably ignorant of the relevant normative facts are no more blameworthy than those who perform objectively impermissible acts because they are non-culpably ignorant of the relevant non-normative facts. Are you disagreeing with me? If so, could you say more to motivate your contrary intuitions?
Could you explain why I should think that the claims that you list under the “Utilitarian Argument” are plausible? And could you say whether you’re talking about culpable or non-culpable ignorance?
More importantly, does anyone think that the permissibility of S’s performing x is closely connected to the utility of an agent’s performing an act that constitutes her blaming S for performing x as opposed to the blameworthiness of S for performing x? Surely, the two can come apart. Blaming/punishing S for doing x might have great utility even if S is not at all blameworthy for performing x. But isn’t permissibility tied to blameworthiness and not to the utility of acts of blaming? And if so, what’s the relevance of the Utilitarian Argument?
By the way, the normative and non-normative uncertainty that I’m talking about in the post is due to the agent’s less-than-ideal epistemic position and not to his or her failure to seek out and consider all of the available evidence. Thus, all the uncertainty that I’m talking about is due to non-culpable ignorance.
Heath,
I think that your (a) and (b) are both false. There is genuine normative uncertainty (that just seems obviously true), and there is, in practical terms, something to be done about it. If you look at the theories that people such as Zimmerman and Sepielli offer, you’ll find some interesting theories about what is to be done about normative uncertainty, and it isn’t nothing. Of course, I realize that you’re just offering possible explanations for why people have been attracted to H theories. But I’m actually interested only in whether there are any good reasons for philosophers to be attracted to H theories.
Hi Doug –
Cool post. I think I totally agree.
But, combined with maybe something Brad said, could one explain the motivation for an H theory in this way? One might say that, at least for the standard consequentialist, the non-normative morally relevant facts are in principle unknowable (see, for instance, Lenman’s argument to this effect). However, one might argue, the normative facts are at least in principle knowable. So if we tie praise and/or blame to things that are roughly speaking knowable, one could be blamed for not knowing the normative facts, but not the relevant non-normative facts, given that the latter can’t be known (in at least the consequentialist case).
I think there’s a lot in what I just said that’s implausible, but that might go some distance toward explaining intuitions.
d
Doug,
First, you are right to bring in the culpability issue. That is why I said my statements were crude. I was trying to give a diagnosis of the, perhaps illusory, appeal of H1.
Once we focus on the version of the theory you specify in your response to me, I wonder what non-consequentialist philosophers you have in mind when you write this: “Many philosophers (consequentialist and non-consequentialist alike) accept theories according to which the permissibility of an act is affected by the agent’s non-normative uncertainty, but not by the agent’s normative uncertainty.”
Second, in giving my diagnosis, I was merely reporting the sorts of intuitions that might support H1, not reporting my own. But here is a line of thought that tempts me: Start with Scanlon’s idea that to claim an act is blameworthy is to indicate “something about the agent’s attitudes toward others that impairs his relations with them.” (MD, 145) Now if someone seriously mistreats me and I see that this mistreatment is no accident because it reflects his values – imagine a racist case – that impairs my relations with him; and he is therefore blameworthy. If I find out that he has the values he does because he is non-culpably ignorant of their falsity, that might matter to me, and change my attitude somewhat, but my relations with him will still be substantively impaired; he is still blameworthy.
I’m no fan of the utilitarian view I was positing in reason 2 – you provide some of the reasons why. Again, just playing devil’s advocate. I was thinking of a rule-utilitarian view that would reject talk of blameworthiness and permissibility at the fundamental level and justify the practices of using such talk by appeal to their utility.
Brad,
You ask what nonconsequentialists I have in mind. I have in mind those nonconsequentialists who hold that there is an agent-centered restriction against doing what imposes some significant subjective risk of causing harm to another. I believe this includes Thomson as well as most other nonconsequentialists. Such nonconsequentialists hold that S’s non-normative uncertainty affects the permissibility of S’s actions but deny that normative uncertainty (with regard to, say, whether it’s permissible to inflict harm on one so as to prevent more numerous others from doing the same) affects the permissibility of S’s actions. Actually, they don’t explicitly deny that normative uncertainty affects permissibility, but the account of permissibility that they give implies that it doesn’t.
On the other matter, are we imagining that the racist is justified in holding her racist values given her epistemic position? I find that hard to imagine. In any case, though, I don’t see how I can blame someone for doing X to me if she conscientiously sought out and considered all of the available evidence and on the basis of that evidence came to the justified belief that she subjectively ought to do X to me and was motivated by that belief to do X to me. After all, wouldn’t I do the same to her had I been fully conscientious and in her epistemic position?
Doug,
Thanks. Did not know Thomson would take that line even once we bring in the culpability stuff – interesting. Is there a good text to see this?
We are imagining that the racist is not guilty of being epistemically irresponsible. If you find this inconceivable, how about picking a new example of the same sort to help pump our intuitions?
On Scanlon’s Watson-influenced view we should admit that there are blameworthy people that we should not blame. Roughly, the mistreatment in this case is still attributable to the racist and that is why it impairs my future relations with him. But, as your response indicates, that does not entail that I can rightly hold him accountable for the mistreatment, e.g. by verbally blaming him.
I am not seeing why the fact that I would do the same had I been fully conscientious and in her epistemic position undermines the claim that her mistreatment impairs my relations with her. Why not think I would have similarly impaired relations with my “counterfactual self” who would be a racist?
I’m thinking of Thomson’s The Realm of Rights, but, admittedly, it’s been a while since I’ve looked at it, so I might be misremembering. What I’m remembering is that she holds a view that commits her to the claim that it is impermissible for S to do X if S’s doing X carries with it a significant subjective risk of causing harm to others and would do no good besides preventing two others from doing the same. Since this is a sufficient condition, it follows that it would be wrong for S to do X even if she is non-culpably uncertain as to whether some agent-neutral moral theory that doesn’t accommodate agent-centered restrictions is correct.
On the other matter, I wasn’t claiming that the fact that I would do the same had I been fully conscientious and in her epistemic position does (or doesn’t) undermine the claim that her “mistreatment” (isn’t this a bit loaded?) impairs my relations with her. I was claiming that it undermines the claim that it would be fitting to blame her. And I take it that someone is blameworthy iff it’s fitting to blame her. And, as I see it, whether it is fitting to blame someone for her actions is a different question from whether we should perform certain actions that constitute outward expressions of blame. So, unlike you, I’m not taking it for granted that to claim an act is blameworthy is to indicate “something about the agent’s attitudes toward others that impairs his relations with them.” (MD, 145) Of course, I haven’t yet read Scanlon’s book, so I’m at a disadvantage here.
Ok. Can you say more about what you take “being a fit object of blame” to involve? You were not thinking it involves being a fit object of an outward expression of blame – fair enough & sorry for assuming otherwise. But you are apparently also not thinking it involves being a fit subject of impaired relations. But what is the other option? Just trying to see the view.
You might be right to ask about the applicability of ‘mistreatment’. Perhaps the intuition in favor of H1 stands or falls with the intuition that people like the racist in question are mistreating others, despite the fact that they are not open to criticism on the epistemic front.
Interesting stuff – thanks!
Brad,
I don’t know how successful I’ll be, but I’ll try to fill things in as best I can. I think that S is blameworthy for doing X (i.e., it is fitting to blame S for doing X) iff it is fitting for S to feel guilt for having done X and fitting for others to be indignant with S for having done X. And I’m using ‘fitting’ in whatever sense it is in which it’s fitting for a subject to believe in proportion to her evidence and fitting for an impartial spectator to prefer a better outcome to a worse one.
Some similar things have been said above, and I apologize if this is overly redundant. It seems to me that we can explain both the preference for H theories and the specific (seemingly universal) preference for H1 theories over H2 theories by acknowledging that many people (implicitly or explicitly) take the normative truth to be a priori (or, at least, in less fancy terms, to be obvious). When confronted with someone professing normative ignorance, it seems a natural reaction (at least in any case where we do not believe ourselves to be ignorant) to roll our eyes and suggest that the person is either (culpably) irrational, foolish or lazy in their normative reasoning. The same is not true of much non-normative ignorance. So, the quick answer would be that people prefer H1 because they think there is no such thing as non-culpable normative ignorance, and thus that any proper PS view will collapse into an H1 view (perhaps this is what Scott was thinking?). Anyway, I think the debate should just be over whether this gut-reaction is right—whether there is non-culpable normative ignorance. One reason to think that there is is precisely the kind of concern Sepielli raises (if I recall correctly)—even if the normative truth is a priori, we might just not have time to reason to the truth in all situations. But I think this leads to Daniel’s point early on: PS views might be best for action-guiding in the real world, but if time-constraints are the only factor that can render normative ignorance non-culpable, the (timeless) task of the normative ethicist will ultimately be a search for an H1 view.
David,
Strictly speaking, we should be talking about non-culpable normative uncertainty here. And even if normative facts are known a priori, I find it incredible to think that those who suffer from normative uncertainty must be “irrational, foolish or lazy in their normative reasoning.” The fact that I know that there are so many wise, rational, and diligent moral philosophers and normative reasoners who disagree with me about a whole host of normative issues may not defeat my moral knowledge, but it does lead to my being less than fully certain about many normative issues. And it seems that insofar as I’m a conscientious normative deliberator, this normative uncertainty will affect what I decide to do. For instance, it might be that under no candidate moral theory is my doing X wrong, but on one candidate moral theory my failing to do X is not just wrong but especially morally bad as wrong acts go. In that case, even though I may rightly believe that this moral theory is false, it may make sense for me to decide to do X and avoid the possibility of doing something morally atrocious given my lack of certainty with regard to the falsity of this moral theory.
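To illustrate with made-up numbers (a rough sketch only, and one that brackets the notoriously hard problem of comparing degrees of wrongness across rival theories): suppose I have a 0.9 credence in T1, on which both doing X and refraining from doing X are permissible, and a 0.1 credence in T2, on which refraining from doing X is not merely wrong but morally atrocious, say 100 units of wrongness to T1's 0. Then the expected moral disvalue of refraining is $0.9 \times 0 + 0.1 \times 100 = 10$, whereas that of doing X is 0. A conscientious deliberator in that epistemic position plausibly does X, even though she rightly regards T2 as probably false.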
I’m with Doug on this one. Even if some normative facts are a priori, plenty of them seem to be facts we’ll never know. Facts about the comparative stringency of conflicting moral considerations seem to be precisely the sort of thing that reasonable people can disagree about in spite of careful reflection and knowledge of all the relevant non-normative facts. Imagine a series of neighborhoods, each slightly more dangerous than the last, filled with parents who know of all the shady vans, dangerous dogs, broken bottles, etc. in the neighborhoods, and who have to decide whether to let their kids out to play or to keep them in where it’s safer. It seems rather implausible (to me) to think that rational parents will be able to reliably distinguish those neighborhoods where it is just slightly to moderately reckless to let kids out from those where it is slightly to moderately smothering and overly protective to keep the kids in.
Doug,
I had a question about this:
“My intuition is that those who perform objectively impermissible acts because they are culpably ignorant of the relevant non-normative facts are just as blameworthy as those who perform objectively impermissible acts because they are culpably ignorant of the relevant normative facts.”
I sometimes have that intuition, but then again I like saying things like “Ignorance is no excuse”. You could say this. The reason that non-culpable ignorance concerning matters of fact excuses (sometimes) is that someone who acts on mistaken factual beliefs has not shown that they are willing to act against the values that we should care about, but someone who acts on non-culpably ignorant normative beliefs does show that they are willing to act against values that we ought to care about. That the judgment is itself non-culpable just shows that they have not shown themselves to be willing to shirk epistemic responsibility. I don’t know if this is my view because I’m not a mind-reader, but I’m not certain that there’s not something in the neighborhood of this that’s the right view. (I’m almost certain that I’ve read a defense of this view somewhere but I can’t recall a source. Duff?)
Doug,
Yes, sorry, I should have said uncertainty rather than ignorance, though I think I want to say all the same things with respect to uncertainty.
You write:
Let us assume that we have agreed that normative facts are a priori. This means, it would seem, that in cases of (purely) normative disagreement, the only explanation for that disagreement is that one (or both) of the disagreeing parties have made some sort of cognitive error. You (rightly) recognize that when confronted with a dissenting intellectual peer, the error is as likely yours as theirs. You thus conclude in normative uncertainty. The question is, I take it, whether you are “culpable” for this uncertainty. Assuming that you do not suffer from some cognitive deficiency, doesn’t it seem that, given enough time, you should be able to resolve the matter? After all, the truth (being a priori) seems available to anyone who considers the matter carefully enough. So, again, insofar as we take others not to be cognitively deficient, it seems that our attitude should be that, given proper time and effort, they will reach the normative truth, just as we will. And thus when we believe ourselves to have reached that truth (we are not uncertain), and it seems that others have had ample time to do the same, we see their uncertainty or ignorance as culpable. In cases where we remain uncertain, it would also seem reasonable for us to think that this is merely because we have not had the proper time to think things through, and thus that insofar as we are doing normative ethics (rather than trying to make a decision about how to act next) we won’t bother stopping for a PS theory; we’ll head straight on through to find the right H1.
Clayton,
Your example has some force, but it seems too general. This is a problem we face all the time with many non-normative issues (baldness, etc.). The fact that in those special cases of vagueness we will never reach a definite answer doesn’t seem to speak against a general attempt to discover the correct normative theory.
David,
You talk about our “reaching the truth” and say that, once we have, we will be certain and, thus, view the uncertainty of others as culpable. But can’t I know that p and not be certain that p? Knowledge doesn’t require certainty, right? You seem to be assuming that because normative knowledge is to be had a priori, one will be certain about one’s normative knowledge irrespective of the presence of dissenting intellectual peers. Why should I accept this assumption? Or am I mistaken in thinking that your claims rest on this assumption?
Clayton,
You talk about the “values that we ought to care about.” I’m not clear on what you mean by that phrase. Suppose that V1 is in fact better than V2 but that my epistemic condition is such that I ought to believe that V2 is better than V1. On your view, ought I care more about V1 or V2?
Here are some more explanations of the attraction of H1 theories. (1) The task of moral philosophy is to tell us what ends we should have. (2) The task of the theory of rationality is to tell us how our ends ought to be pursued. Consequentialists might accept (1) and (2), thinking that the best answer to (1) is the utilitarian theory of value and that the best answer to (2) is Bayesian decision theory. If you put these together, you get an H1 theory. Solving (1) and (2) is part of one’s task as a philosopher; discovering all the facts about human psychology/climates/health care/etc. that might be relevant to decision making is not a task that philosophers could expect to do better than others. True, there are applied ethicists who have other things to say. But that theoretically inclined philosophers only go this far is part of a reasonable division of labor.
A second rationale. We want to say what the very best conclusion about what to do would be if you had to reach it from the armchair. The very best answer would incorporate (1) and (2), and it would be that you should maximize expected utility. Someone already said this, so I won’t say much about it.
(A wrinkle. We might wonder whether there are further epistemic constraints that you can figure out from the armchair or that would be part of what philosophers could do well to contribute. If there are, we may wish to add them in. The answer might then take the form: maximize expected utility on a certain probability function given your total evidence.)
Finally, PS theories, if they’re any good, might be unstable as well. A PS theory that doesn’t just tell you to do what you ultimately think you should do is probably going to have a Bayesian element. But if it has one, it might be too epistemically demanding due to familiar issues of logical omniscience and such. And this kind of epistemic demandingness might be objectionable for the very reason that it is objectionable that agents should have to act in accordance with the correct moral theory. So there might not be a good PS theory to turn to. (I say this being unfamiliar with the work of Zimmerman, Sepielli, and others. Perhaps they have solutions to this that show the worry to be unfounded.)
Even if your answer has no Bayesian element, I suspect that if it has any substantive advice to give at all, there will be situations in which it is too difficult to apply in practice. This is a familiar issue raised by the decision procedure/criterion of rightness distinction. At any rate, my suspicion is that there will be a trade-off between substantiveness and ease of application. H1-type answers do not strike me as a bad way to make the trade-off, especially if we remember the difference between criteria of rightness and decision procedures.
Hey Doug,
“You talk about the “values that we ought to care about.” I’m not clear on what you mean by that phrase. Suppose that V1 is in fact better than V2 but that my epistemic condition is such that I ought to believe that V2 is better than V1. On your view, ought I care more about V1 or V2?”
If V1 is better than V2, we ought to care more about V1 than V2 (e.g., if V1 is actually good and V2 is no good at all, we shouldn’t care about V2 at all). If your epistemic condition is such that you shouldn’t believe this, then you shouldn’t believe that you should care more about V1 than about V2, but you should still care more about V1. I don’t think I need to know what someone’s epistemic condition is like to say that they ought to care about animal suffering, and if they act in ways that cause animals to suffer (knowing that they are doing this) when they have nothing that I’d say counts as an overriding reason to do it, I’d say that their normative ignorance (even if non-culpable) does not excuse their actions. Someone who doesn’t know that animal suffering is bad and sets up dog fights engages in actions that are, I think, not excusable. Compare their deeds to those of a vet who does surgery without using the right amount of anaesthetic, where the dog suffers just as much as it would have suffered had the dog been in a dog fight. This kind of non-normative ignorance (i.e., not knowing that the anaesthetic isn’t working) seems a much better excuse than the normative ignorance (i.e., not knowing that animal suffering is bad). I don’t think that this is just because it is easier to imagine how someone could have the one sort of ignorance non-culpably.
It’s possible to take Sartre as holding to a theory like H2. There are no objective normative facts for him, so it’s all subjective on that front. But he insists that we can criticize someone’s moral judgments for getting the non-normative facts wrong. For example, the racist who thinks black people are inferior to white people has got the facts wrong and can thus be criticized for making an error. I’m not sure if that’s what you meant by H2, but it does seem to me that you could classify it that way.
Just noticed this post. I’m in a rush right now, and so I’ll say more about some of this stuff later, but I thought I’d at least put two things on the table now.
1) My current view is not that purely subjective theories should serve as action guides in the sense that such a theory should be a premise in one’s practical deliberation, although I admit that I bungle this in “What to Do…”. Rather, the theories that should play this sort of action-guiding role are either objective theories, or what I call “epistemic probability-relative” theories, where statements about epistemic probability are those used to express one’s credential state, and their semantics is determined, in part, expressivistically, by the credential state they’re used to express. (I think Seth Yalcin and maybe some others have well-worked-out expressivist semantic theories.) The basic ideas: “There’s a .5 EP that murder is wrong” stands to one’s credence of .5 that murder is wrong just as “Murder is wrong” stands to one’s full belief that murder is wrong, and the semantic value of the former is determined in part by the state of uncertainty it’s used to express. Anyway, I’ve got a paper on all of this that’s still in its early stages.
2) Someone brought up the problem of normative uncertainty regarding theories of what to do under normative uncertainty. (And, of course, further and further iterations are possible.) Really fun problem! There are really two problems lurking here: First, you might think it’s impossible to guide one’s actions by norms if one is normatively uncertain “all the way up”. Second, action-guidance aside, it’s not obvious what to say about cases in which doing A is locally rational relative to such-and-such mental states, but one has a credence of less-than-1 that A is indeed locally rational relative to those states. What’s it rational to do relative to the original mental states, plus that extra credence?
I have a line on the first, action-guidance-related problem that involves some of what I said in point 1 above. Re: the second problem — I’ve got what I think is an answer to that, too, but it’s a little complicated to get into right now. At any rate, you see the second problem cropping up in all sorts of interesting places. Robert Nozick discusses uncertainty among evidential and causal decision theories in his The Nature of Rationality. In the peer disagreement literature, Brian Weatherson has this paper where he argues that the “Equal Weight View” is in some sense incoherent when one’s peers disagree regarding that very view (which I think is best interpreted as a view about rationality). Jake Ross and I both discuss the problem in our dissertations. It also comes up in the original debates about moral uncertainty, conducted between Catholic moral theologians. They were trying to develop what they called “reflex principles”, which were, putting aside certain complications, roughly what Doug is calling “purely subjective” theories.
Okay, more (and more developed) stuff later…
I like the points Nick Beckstead raised. (Though we might want to treat normative false beliefs differently from other a priori – e.g. mathematical – false beliefs. I sketch a tentative proposal in my post, ‘Rules for Normative Risk‘.)
Another way to make the point is that we may be interested in what perfectly virtuous agents would do. A perfectly virtuous agent has all the right motivations, though there’s no guarantee that they’ll have all true beliefs (if the world serves up misleading or incomplete evidence). This is reflected in H1 theories.
A certain kind of ‘quality of will’ theorist will see this as relevant to considerations of blameworthiness, also. Someone (say a young child) who isn’t in a position to realize that it’s wrong to torment animals may not be irrational when they act in this cruel way. But they are still cruel. And we may think that negative reactive attitudes are warranted towards vicious people, even if they aren’t epistemically irrational in failing to appreciate their own viciousness.
Doug,
Fair enough, let me grant that the threshold for normative knowledge may be lower than the threshold for normative certainty. Now, of course, for real people true certainty may never be possible. But certainty is, nevertheless, an epistemic ideal, is it not? Assuming you believe it is, would you agree with the following?
Premise: If X is knowable a priori then, to the extent that certainty about X is possible, this level of certainty is justified by a process of pure reasoning.
If you grant this, then I think my point holds. When we are doing normative ethics (rather than trying to decide what to do in limited time) we see that through pure reasoning, we will, ideally, reach the correct normative theory—an H1 theory.
One concession: What if we reason and reason and reason, can find no fault in ourselves or our opponents, yet continue to disagree? In that case, at the limit, we might have to admit that we and our opponents suffer some cognitive deficiency, and at that point we would have to rest on a PS theory. (Some might say, instead, that perfect reasoners can disagree on the normative facts, though those facts are a priori. This claim has never made any sense to me.)
I agree both with David Faraci’s first post, and Daniel Elstein’s message above. Doug, I’m a little puzzled still about the strong antagonism you suppose must exist between PS and H1 theories. One might say the same thing about Einsteinian and Newtonian physics: they can’t both be true. Well, yes, in the sense that they can’t both be ultimate, complete descriptions of the universe. But if (say) Einsteinian physics is this (it probably isn’t, of course), then this gives us excellent reasons for adopting Newtonian physics for 99% of all applications, reserving the ultimate theory for unusual cases. So when you ask if I am claiming we “should adopt false theories”, I say of course, if “adopt” means “use in practice in relevant circumstances, which may be most of the time.” If “adopt” meant “act as if it was the ultimate, foundational truth,” then no. But we don’t need to think of H1 in the second sense to be “interested in developing” such views; the first sense of “adopt” is quite sufficient to motivate developing such theories.
Another quick comment:
Here are two distinctions that have been alleged to be significant, and that it’s important to see cut across the normative/non-normative distinction:
First, the a priori/a posteriori distinction. As Doug has mentioned, there are plenty of a priori truths that are non-normative. Furthermore, it strikes me as strange to dismiss as unimportant theories that take as inputs agents’ credences distributed over propositions concerning an a priori domain. “How should we treat criminals given our uncertainty regarding free will skepticism?” and “How should we treat animals given our uncertainty regarding whether they have intentional states?” seem like they’re in perfectly good order.
(And for what it’s worth, this seems like an odd line to draw for present purposes. Truths ascertainable through introspection and the kind of access to what one is doing that Anscombe talks about aren’t a priori, but they’re much more readily available in what seems to be the relevant sense of “available” than many a priori truths are.)
Second, (what we can read as) Richard’s distinction between truths such that a completely virtuous agent would be certain of them, and other truths. It is plausible that one can only be a totally virtuous agent if one is certain of basic moral truths, or paradigm cases of rightness or wrongness, or what have you. But it sort of strains credulity to think that, if the right theory of distributive justice is prioritarianism with such-and-such a weighting function, one is less than fully virtuous for having some credence in prioritarianism with a slightly different weighting function. If this is wrong, then I suspect Richard’s criteria for virtuousness are so cognitively stringent that, no, I don’t really care about what the perfectly virtuous agent, as such, would do.
Well, thanks to everyone for the interesting comments. I don’t have time now to respond to them individually. More importantly, I need to mull them over for a while. I hope to do so over the next day or so and then respond to them. In the meantime, I thought that I would try to spell out my worry about H1 theories (such as subjective utilitarianism) in a slightly different way.
Imagine that S’s beliefs and credence levels are in exact proportion to S’s available evidence. Assume that S’s available evidence about both the normative and the non-normative facts is misleading. Assume, for the sake of simplicity, that S has only three options: X, Y, and Z. Assume that S’s evidence about the relevant normative facts is such that he is warranted in giving very high relative credence to the thought that he objectively ought to do the act that has feature F1, and it’s clear that the only act with feature F1 that’s available to S is Z. Nevertheless, let’s assume that, in fact, S objectively ought to do the act that has feature F2: i.e., the feature of maximizing utility. Let’s further suppose that S’s evidence about the relevant non-normative facts is such that S is warranted in giving very high relative credence to the thought that the act with the highest expected utility is Y. In fact, though, the act that would maximize utility is X.
Now I want to say that, objectively speaking, S ought to do X. And I want to say that, subjectively speaking, S ought to do Z. Subjective utilitarianism, by contrast, says that S ought to do Y. But in what sense of ‘ought’ could subjective utilitarianism be correct in implying that S ought to do Y? It seems that it can’t be in the objective sense — the sense in which that’s what a good and conscientious agent with full knowledge would do. And it seems that it can’t be in the subjective sense — the sense that’s tied to what a conscientious deliberator in S’s epistemic position would decide to do. So in what sense of ‘ought’ might we think that subjective utilitarianism is making a true claim about what S ought to do?
Does anyone deny that such a scenario is possible? Does anyone deny that X is what S objectively ought to do and that Z is what S subjectively ought to do? Am I missing some interesting third sense of ‘ought’ where the sentence ‘S ought to do Y’ comes out true?
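To make the three-way divergence vivid with invented numbers (a toy illustration only, which brackets the problem of intertheoretic value comparisons and isn’t anything the PS theorists cited above are committed to): suppose S has a 0.8 credence in T1, the theory on which S ought to do the act with feature F1, and a 0.2 credence in utilitarianism; suppose that, on S’s non-normative evidence, the expected utilities are EU(Y) = 10, EU(Z) = 6, and EU(X) = 5; and suppose T1 assigns a value of 10 to Z and 0 to X and Y. Then a simple expected-moral-value calculation of the sort discussed in the PS literature gives Z a score of $0.8 \times 10 + 0.2 \times 6 = 9.2$, Y a score of $0.8 \times 0 + 0.2 \times 10 = 2$, and X a score of $0.8 \times 0 + 0.2 \times 5 = 1$. Z comes out on top, Y (the subjective-utilitarian verdict) second, and X (the objectively best act) last, which is just the three-way divergence described above.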
Consistent w/ my last post, I would say that in the situation you just described, S ought to do Z. Subjective utilitarianism, by saying S should do Y, is strictly speaking incorrect. But it is correct for a large number of other cases, perhaps most, and hence should be developed by conscientious moral philosophers, and its practical adoption should be urged about all agents (along with the relevant evidence supporting the validity of its basic norms).
I meant *for* all agents, not *about* all agents, in my last post. I apologize for this and other spelling errors recently; I am accumulating evidence that I should preview posts and edit them instead of writing them too-quickly between sessions at a conference. This would produce better consequences, which I have long ago decided is what I ought to produce. 🙂
Doug,
Can you clarify what you mean by “available evidence?” Is available evidence what S takes to be evidence or what is, in fact, evidence? If the former, then your stipulation still allows for culpability (I think that’s what it was meant to avoid). If the latter, then I don’t see how this is possible. How can objective evidence (in matters a priori) be misleading?
Below are some individual replies:
Nick,
You write: “Here are some more explanations of the attraction of H1 theories. (1) The task of moral philosophy is to tell us what ends we should have. (2) The task of the theory of rationality is to tell us how our ends ought to be pursued. Consequentialists might accept (1) and (2), thinking that the best answer to (1) is the utilitarian theory of value and that the best answer to (2) is Bayesian decision theory. If you put these together, you get an H1 theory.”
I don’t accept either (1) or (2), but even if I did and even if I believed that “the best answer to (1) is the utilitarian theory of value and that the best answer to (2) is Bayesian decision theory,” I don’t see how this gets us to an H1 theory. What do you see an H1 theory being a theory about? I don’t think that it’s a theory about what S objectively ought to do, because what an agent objectively ought to do doesn’t depend on S’s non-normative uncertainty. And I don’t think that it’s a theory about what S subjectively ought to do, because what an agent subjectively ought to do would seem to depend on what she ought to believe are the best answers to (1) and (2), not on what you or I ought to believe are the best answers to (1) and (2).
Clayton,
Do you think that S’s doing Z in my latest example is inexcusable? Perhaps, we just have different intuitions.
Richard,
You write: “Another way to make the point is that we may be interested in what perfectly virtuous agents would do. A perfectly virtuous agent has all the right motivations, though there’s no guarantee that they’ll have all true beliefs (if the world serves up misleading or incomplete evidence). This is reflected in H1 theories.”
Can’t someone with all the right motivations have non-culpable normative uncertainty, as where the world serves up misleading or incomplete evidence about the normative facts? And if so, why do you say that this is reflected in H1 theories as opposed to PS theories? I think of PS theories as asking what a perfectly (epistemically and practically) virtuous agent would do if she faced S’s choice of alternatives and was in S’s epistemic position.
David,
You ask if I grant the following premise: “If X is knowable a priori then, to the extent that certainty about X is possible, this level of certainty is justified by a process of pure reasoning.”
I don’t. Let’s suppose that X is some mathematically derived proposition. Suppose that S1 and S2 have independently derived X via the exact same long and complicated proof. Suppose, though, that S1 and S2 differ in the following two ways. First, S1 has in the past constructed many bad proofs. He often makes errors that lead to his deriving false conclusions, especially when the proofs are long and complicated. S2, by contrast, has in the past constructed many long and complicated proofs and they’ve never contained any errors. Second, S1 knows that his friend has also been working on the same problem and has come to a different conclusion. This friend is an intellectual peer. S2 knows that his friend has also been working on the same problem and has come to the same conclusion. This friend is an intellectual peer. So X is knowable a priori but S1 and S2 are justified in having different levels of confidence in X due to their differing a posteriori evidence.
Regarding the availability of evidence, I think that different agents will have different evidence available to them with regard to the a priori, both because of the different sorts of a posteriori considerations that might be available (see above) and because different agents might have different evidence available to them given their differing abilities with regard to a priori reasoning. Some might be able to construct proofs that others are unable to construct. Also, different evidence might be available to different agents given different time constraints. So the evidence that is available to S is all and only the evidence that would be available to S were S perfectly epistemically and practically virtuous. And different perfectly virtuous agents will have different evidence available to them given differences in their abilities, time, and external circumstances.
Scott,
I’m interested in whether subjective utilitarianism is a true theory of anything interesting. You seem to concede that it is neither the true theory about what we objectively ought to do nor the true theory about what we subjectively ought to do. And there doesn’t seem to be any third thing that it is a true theory of. So it’s a false theory. Of course, it may be true that its deontic verdicts are correct in many instances. But although its verdicts are correct, it’s wrong as to why they’re correct. As I see it, subjective utilitarianism says that an act is permissible just when, and because, it maximizes expected utility.
It may also be true that we should employ subjective utilitarianism as a decision procedure. I doubt it, but that’s not my topic here. My topic here is whether it’s a true theory of anything interesting.
Doug, we’re talking about permissibility, right? Not blameworthiness?
With respect to permissibility, a normative theory is surely in the business of specifying what the relevant normative facts are, right? That is, a person who is non-culpably ignorant of the normative facts may not be blameworthy when he performs an impermissible act, but the act is impermissible all the same.
That is not to say that a PS theory is not logically possible. There is logical space for a normative theory which says that you ought to do what your idealised self sincerely thinks you ought to do, where the idealised self is subject only to the very weak requirement of taking reasonable measures to keep oneself adequately informed about the normative and non-normative facts. This would neatly exclude people who are culpably misinformed, but it is basically a PS theory. Still, it requires further spelling out of what the normative facts are, and there is a worry about the theory's self-referentiality: which normative facts should a person take due care to find out? Presumably, those facts which the PS theory itself points to. Is a person culpable if he is not sure whether Kantianism is right or wrong? Or act utilitarianism? That is the very debate over which philosophers are already banging their heads against each other. Saying that taking adequate measures to identify the moral facts includes deciding on the correct moral theory is too stringent a requirement.
So PS theories seem unconvincing as theories about permissibility. Either a PO or an H1 theory would in fact be better. Also, PS seems to imply relativism, which should give us reason to reject it; doing normative theory often involves the assumption that relativism is false.
And, as you say, PS may not say anything useful beyond the intuitions we have in common: killing bad, torture bad, cooperation good, etc.
I suppose the only serious candidate to which PS in any way applies would be Rossian pluralism. There could be genuine uncertainty about the stringency of any particular duty. Indeed, this is often one of the criticisms offered against the theory: that deciding which duty is more stringent is too subjective.
Doug,
Nice post. I am tempted to agree. But I don't yet know well enough what it would be to take into account a person's normative uncertainty. Presumably that would involve taking a stand on what is, and what is not, a reasonable normative view given one's information. And if we take the view that any view of morality supported by smart colleagues must count as reasonable, then there will be a very, very wide range of reasonable views. And if we then say that it is morally OK for a person to act in any of the ways that would be licensed by some reasonable take on morality, it might turn out to be surprisingly hard to act in ways that are not morally OK.
I take this to be a problem for all of us, since I really do find your case prima facie tempting. But before I sign on I would need to understand better what it would look like to take reasonable normative uncertainty into account.
Hey Doug,
You don’t say it explicitly (perhaps because it doesn’t need to be said) but are you assuming that:
(SOE) If S ought-subjectively to do A, S’s A-ing is not an instance of inexcusable wrongdoing.
To make this somewhat concrete, suppose that F2 has to do with maximizing utility and F1 has to do with religious obligations. (I'm imagining that this subject's normative evidence consists of intuitions concerning principles of varying levels of generality, and that she sees something good in beneficence and something good in doing what the gods told us to do: be chaste, be charitable, contribute significant sums to the construction of gaudy temples, and so on.) If it's a case where someone, say, contributes significant portions of their savings to the construction of gaudy temples when that money could have been better used to benefit others with real needs, I get the idea that, given the subject's perspective, this was a reasonable thing to do, but I don't think it's obvious that this conduct is excused. Not if the agent engages in the conduct because of things that (objectively) she oughtn't care about, and this leads her to fail to respond properly (read in an objective sense) to things she should (objectively) care about. Can I just reject (SOE) and agree with you (?) in saying that this subject did (subjectively) what she should have done if she does Z but does what she objectively should have done only if she maximized utility?
Doug,
Fair enough. Let me adjust:
Premise: If X is knowable a priori then, to the extent that certainty about X is possible, this level of certainty can be justified by a process of pure reasoning.
Of course it is true that certainty about a priori matters (as with knowledge about such matters) can be gained or lost through a posteriori evidence. But the fact remains, as I suggested, that we expect that given enough time (and to the extent that neither is cognitively deficient) both S1 and S2 could independently reach the correct conclusion regarding X. Similarly, again, we expect that normative theorists (given enough time/no cognitive deficiencies) could construct a viable theory, which (given non-culpable non-normative uncertainty, which will likely always be with us) would be an H1 theory.
Along these lines, let me throw my hat into the ring concerning the “what’s the point of H theories?” question. It is arguable, I think, that it is not merely the case that we want an action-guiding normative theory, but that the true normative theory must be action-guiding. Say it is true that action A would maximize utility. Say further that we have no way of coming to know this. I tend to think it is impossible that it could turn out that the fact that A would maximize utility is a normative reason to A, because reasons must be capable of guiding action. So objective utilitarianism is false (of course, it might be coextensive with a true theory, say subjective utilitarianism, given perfect knowledge of the future). Now, suppose you agreed with my points above and thought that, given enough time, we could always reach the normative truth. And suppose, further, that you thought that while people must be epistemically responsible with respect to the non-normative truth, there will always be gaps in what they do and can know non-normatively. Wouldn’t you then go looking for an H theory, not because it would be the best you could do, but because it would be the truth?
Some more responses…
Murali,
I’m afraid that I wasn’t clear on everything that you said, so I’m not sure whether the following adequately addresses all of your points.
We’re talking about both objective and subjective permissibility. And I think that although S isn’t necessarily blameworthy for doing what it is objectively impermissible for her to do, S is blameworthy if S does what it is subjectively impermissible for her to do. Yes, subjective permissibility is subjective, but I don’t see why that’s problematic.
David S.,
I don’t think that it’s subjectively permissible for me to perform any act that would be licensed by at least one moral theory that it would be possible for some reasonable person, with different evidence, to accept. What it is subjectively permissible for me to do is a function of what credence levels it’s reasonable for me to assign to various normative possibilities, not a function of what credence levels it’s reasonable for some other person, with different evidence, to assign. So although it might be reasonable for Peter Singer to accept act-utilitarianism with a high degree of credence, I don’t think that it’s reasonable for me to do so. Of course, given that Singer is, I believe, a reasonable person, the fact that he disagrees with me provides me with some reason to be uncertain with regard to my own normative beliefs. How this affects what I should do is complicated. The people whom I cite above know much more about this than I do. But it seems that I would have to look at what the normative possibilities are, what levels of credence should be assigned to each of these possibilities, which acts would be permissible/impermissible on each of the possibilities, how morally good/bad each permissible/impermissible act would be on each of the possibilities, etc.
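Schematically, and just as a rough sketch of one way a PS theory might put these pieces together (not a commitment to any particular formulation), the idea would be to weight each normative possibility by the credence it’s reasonable for me to assign to it:

\[
\mathrm{EMV}(x) \;=\; \sum_{i} \mathrm{Cr}(T_i)\cdot V_{T_i}(x)
\]

where the \(T_i\) are the normative possibilities to which I should give some credence, \(\mathrm{Cr}(T_i)\) is the credence it’s reasonable for me to assign to \(T_i\), and \(V_{T_i}(x)\) is how morally good or bad my doing x would be if \(T_i\) were true. On a view of this shape, my doing x is subjectively permissible only if no available alternative has a higher credence-weighted moral value. (This leaves aside the notoriously hard problem of comparing degrees of moral value across rival theories.)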
In any case, I don’t think that it will be difficult for me to do wrong. There are many acts (e.g., murder for personal gain, lying for personal gain, stealing for personal gain, etc.) that are wrong on any moral theory that it would be reasonable for me to give any credence to.
Clayton,
You ask, “Can I just reject (SOE) and agree with you (?) in saying that this subject did (subjectively) what she should have done if she does Z but does what she objectively should have done only if she maximized utility?”
Yes. It seems that we may disagree about only the plausibility of SOE.
David F.,
Suppose that I accept that new premise. What, then, is the relevance of the fact that S could reach as much certainty as is possible given no limitations in time and no cognitive deficiencies? I take it that if S stands for any real agent, S will have limitations in time and some cognitive deficiencies. And if we assume that S has unlimited time and no cognitive deficiencies and has reached as much certainty as is possible for her to reach, then PS will still get the right answer with regard to what she should do. PS theories and H1 theories diverge only with respect to what agents with normative uncertainty ought to do.
By the way, I don’t think that a theory about what S objectively ought to do needs to be action-guiding. Why would it?
You write, “Wouldn’t you then go looking for an H theory, not because it would be the best you could do, but because it would be the truth?”
No, because there will always be gaps in what real agents do and can know about the normative facts given the realities of their situation.
Doug,
The first answer involving (1) and (2) did not attempt to specify a use of “ought” corresponding to an H1 theory. It was aimed at specifying a reason that theoretically inclined philosophers would have for giving certain advice. They could specify a theory of value and a rational response to value. And they could say how to respond if you knew more. Given the skills of a theoretically inclined philosopher, this strikes me as a reasonable division of labor between the factual and the normative. On this proposal, we give people an H1 theory and let them fill in the other factual details as they see fit. It isn’t our job to fill in the factual details. If, for some reason, it was especially easy to fill in the details or we had all of the other factual answers, then we wouldn’t have any reason to just say, “maximize expected aggregate utility.” (I don’t know if I accept this story, but I confess it’s the best thing that I can think of right now.)
I take this to be the same reason that an economist advocating a particular model of risk management under uncertainty doesn’t say anything about how to manage risk if you’re uncertain about the acceptability of his model.
Now, some comments that you’ve made suggest to me that you think that subjective rightness should play a certain role with respect to judgments about blame. You aren’t going to get that from the story I’m advocating. (But, I don’t think that that would be an especially good story about blame for a utilitarian to accept anyway.)
Doug, I think that, as I described it, SU could be the true theory of something very interesting: what most normal people should do. Now, show me someone who’s been raised in unusual circumstances, with distorted normative evidence, and I might concede that such a person should not follow SU, and that acts maximizing expected utility would not always be right for that person to do, because I subscribe to a deeper PS-type theory which absolves him.
What would you call a theory that says that the vast majority of normal people should maximize expected utility (and the rest would also have been so obligated, had they not suffered from severely defective normative evidence)? If not SU, we need a new name; and we have been talking at cross-purposes.
Doug,
I apologize for the length of this post. Given that our disagreement appears to be informed by disagreement regarding other matters, I thought it best to state my position and see where we (dis)agree.
First, I think that we agree on much of what you said in your most recent response to me: There is certainly such a thing as non-culpable normative uncertainty, even if it is only ever due to time constraints or cognitive deficiencies. Given this, I think we agree that there is need for a good PS theory.
Second, I think it makes sense that normative ethicists have concerned, and do concern, themselves more with the normative truth than with constructing a decision procedure for weighing one’s normative and non-normative beliefs. After all, the construction of a good PS theory is not necessarily work for the normative ethicist at all; it might better be relegated to, say, a decision theorist. Normative ethicists are, on the other hand, (we hope) best equipped to discover the correct objective theory. So that’s the first part of my answer: the explanation for why many ethicists have preferred H1 theories over PS theories is that they see their task as telling us what is valuable (or whatever), not telling us what to do with our value-beliefs.
Now we want to know about the nature of our theory of normative truth. So, why do so many ethicists prefer H1 theories to PO theories (or PS theories, since I take it there is logical space for arguing that a good PS theory really is the normative truth, leaving us with some sort of relativism, as I believe Murali suggested)? I offer three claims:
Action-Guiding: Whatever the true normative theory is, it must be capable of guiding action, at least in principle. But theories that require things (like, perhaps, perfect knowledge of the future) that are unattainable cannot be capable of guiding action, and therefore cannot be true.
A Priori: To the extent that it is possible to be certain about the normative truth, this certainty can be gained through pure reasoning.
Accessible: Perfect cogitators with unlimited time would converge on the normative truth.
It seems to me that, taken together, Action-Guiding, A Priori and Accessible offer good reason to think that the correct objective theory will be an H1 theory. Action-Guiding makes it unlikely or impossible that it will be a PO theory; A Priori makes it unlikely that it will be a PS theory, assuming there will not always be normative uncertainty. While Accessible does not rule out the possibility that there will always be normative uncertainty, it gives us hope that working towards an H1 theory (rather than merely a PS theory) will not be in vain.
You appear to have rejected Action-Guiding and Accessible. I take it that this leads you (as it should, I think) to want us to have a PO theory of the normative truth and a PS theory for guiding action.
I suspect we would not convince one another without a much broader debate over Action-Guiding and Accessible. But does my position make sense? Part of the point of this thread, I take it, is to make sense of why someone would want an H1 theory in the first place. Have I accomplished this much or do you think that even someone who accepts my three claims should not want an H1 theory?
Doug, let me put it this way. PS is not a very convincing theory even in terms of subjective permissibility, because permissibility is the output of a moral theory acting on the non-normative data. Questions of whether X-ing is permissible carry the implicit clause of whether or not it is permissible under this or that theory. Theories which purport to be action-guiding, however, actually spell out what the normative data are. Very few theories actually make room for normative uncertainty. Ross’s theory is one of them, but Ross’s theory is not action-guiding; it only purports to explain the features of our moral thinking. So admitting normative uncertainty may not be a useful move for generating action-guidingness. Therefore an H1 theory has to be preferable.
As a rough analogy, ignorance of the law does not make one less culpable, but a non-culpable lack of non-normative information (i.e., accidents) can lessen the punishment.
I’m afraid that I have some other, more pressing projects to attend to, so I won’t be able to continue this discussion much longer, nor will I be able to respond to the recent wave of interesting comments as carefully as I would like. But, for what they’re worth, here are some rather quick and incomplete replies…
Nick,
My concern is with what H1 theories are theories about. Clearly, they’re theories about permissibility. But what kind of permissibility? Objective permissibility, subjective permissibility, or some other kind of permissibility? It seems that they must be theories about some other kind of permissibility. But what kind exactly and why is this sort of permissibility interesting? You seem to be concerned with an entirely different question: whether philosophers should advise others to adopt H1 theories.
Scott,
As I understand it, subjective utilitarianism is committed to the following:
SU: For all subjects S and for all acts x, S’s doing x is permissible just when, and because, S’s doing x would maximize expected utility.
Much of what you’ve said suggests that you think that SU is false. Whether some set of people should adopt SU as a decision procedure is a separate question, one that I’m not concerned with here.
David,
I don’t see PS theories as decision procedures. They offer criteria for subjective permissibility. You seem to be assuming that normative ethicists must be concerned with determining only the correct criteria for objective permissibility, and not subjective permissibility, and that the correct account of objective permissibility is the correct account of “the normative truth.” I think, however, that there are normative truths both about what it is subjectively permissible for agents to do and about what it is objectively permissible for agents to do.
I think that we disagree about a number of fundamental issues and that we won’t be able to resolve our disagreement in this forum.
Murali,
You write: “PS is not a very convincing theory even in terms of subjective permissibility because permissibility is the output of a moral theory acting on the non-normative data.”
I disagree. I don’t see why the subjective permissibility of an agent’s actions could not depend on what credences she ought to assign to various conflicting normative propositions given her available evidence.
Doug,
I thought that PS theories, like the one Sepielli offers, are decision procedures for “what to do when you don’t know what to do.” That is, there is a fact of the matter about “what to do,” but since you don’t know that fact, you need some way of guiding your actions, hence the PS theory. I’m not sure I understand what it means to say that this is not “merely” a decision procedure, but “subjective normative truth.” Do you just mean that PS theories aren’t merely recommendations, but that the correct PS theory is, objectively, what agents ought to follow when faced with normative uncertainty? If it’s that, then I would have just said that the PS theory is a decision theory but that our full objective theory includes the fact that one ought to follow the PS theory when uncertain. I’m not sure there’s any substantive difference between that way of speaking and saying that the PS theory is the subjective normative truth. But perhaps you mean something else…
David,
You write: “I’m not sure I understand what it means to say that this is not ‘merely’ a decision procedure, but ‘subjective normative truth.’”
Where are these quotes coming from? Where did I use the words “subjective normative truth” or say that PS theories are “not ‘merely’ a decision procedure”? I said that PS theories are NOT decision procedures. A PS theory is a criterion of rightness; it’s just that it gives us a criterion for subjective, as opposed to objective, rightness.
Doug,
Sorry! Those quotation marks were meant to be scare quotes, not quoting quotes.
The first was just a poor attempt at shorthand on my part; I didn’t mean anything by “subjective normative truth” other than the truth about (as you put it) subjective rightness. I realize that it probably read as though I was suggesting you meant something else.
As to the other point: Perhaps I’m biased by my (perhaps poor memory of my) readings of Sepielli and Ross, but their PS theories look very much like decision procedures to me; they tell me how, given uncertainty, to properly weigh my normative beliefs, levels of certainty, etc. to come to the subjectively best action. So I guess I was assuming that, whatever else they are, PS theories minimally offer decision procedures.
I’m finding that it’s hard to discuss this further on such an abstract level, a problem I often run into when reading about “meta-ethical” issues. In particular, I’m curious about the following:
1) What is an example of a normative fact? What characterizes such facts?
2) What counts as evidence for or against a normative fact?
3) Are normative facts, facts about our objective obligations? Or about our subjective obligations? Or either?
Without clear answers to these questions, I’m unclear about the original definitions of the various types of theories. I fear that I have simply been assuming that these terms have clear referents, and that I (and the other discussants) have been using these concepts, and others defined via them, consistently; but right now I am not at all sure that this is true.
I’ve thought a little further about why I’m seeking a clearer definition of “normative fact.” The problem is that I believe that subjective moral facts are fundamental, and other kinds of moral facts are derivative from these.
Now, Doug announced in his initial post that he simply believes that there are objective moral duties describable by PO, so he believes there are objective moral facts. I do too, in a sense, but would say that objective moral facts simply describe what our subjective duties would be in the counter-factual situation where we knew all the relevant normative and non-normative facts.
But then, why can’t an H-theory supporter just postulate the existence of hybrid moral duties? A hybrid moral duty is the one you would have subjectively if you had access to the full normative evidence, but not the full non-normative evidence. H1 could then be a true theory about such facts, and in turn very interesting. Now one could reject the idea that such facts exist; but this would require an argument. If the argument is that only subjective duty satisfies the condition that the right-making features of our actions are accessible to us in a way that can guide our actions, giving us an opportunity to respond to them, then PO theories face the same problem and should be rejected likewise. So in this sense–which I admit I didn’t state clearly earlier, so this discussion has been helpful–PS and H1 theories can both be true, with different senses of “permissible” etc. in each. I am tempted to believe that only subjective permissibility is ultimate or “real permissibility”, but then that would lead me to treat both PO and H1 theories as kinds of idealizations, and disagree with Doug that H1 theories suffer from some defect that PO theories escape.
Here’s a related problem: it is indeed common to distinguish between objective and subjective obligation, facts, etc., and such appeals have been made throughout this discussion. But Doug’s initial message distinguished the theories by how they defined “permissibility.” He didn’t say “subjective permissibility” or “objective permissibility.” If he means either one of these, then this is muddled; for surely PO theories do not say that what is subjectively permissible depends upon objective facts, nor do PS theories say that what is objectively permissible depends upon subjective facts. So he cannot mean just one or the other throughout the post. Perhaps, then, he means objectively permissible when talking about PO theories and subjectively permissible when talking about PS theories. But if such equivocation is acceptable, then again I see no reason why we can’t find it useful to introduce a third concept and talk about hybrid permissibility for H theories.
Looking over the discussion quickly, I agree with pretty much everything Doug has said. One quick thing:
David Faraci asked whether the theory that (as I would put it) it’s most rational, under normative uncertainty, to do the action with the highest expected value is a decision procedure. That’s not how I intend it. Rather, I think we need to act on rough-and-ready heuristics in many cases, and perhaps not only because it’s difficult to calculate on the fly; it may also be that one or more of the normative theories in which an agent has credence say that calculating expected value prior to action is *bad* in some way; or it may be that it’s “rationally bad” independently of what one’s subjectively probable theories say (in the same way that, according to people who favor risk aversion over expected-value maximization, risk is an irrational-making feature of A’s actions independently of A’s views about risk).
Anyway — several not-implausible grounds for not using expected value maximization as a decision procedure. But now we’re left with the question, “Which heuristics should we use to guide our behavior?” Some heuristics are such that, when we follow them, our actions will more closely approximate those favored by expected value maximization. Other heuristics are such that, when we follow them, our actions will more closely approximate those favored by the theory in which one has the highest credence. An example may help: In a recent paper in Phil Studies, Alex Guerrero argues that when you’re not sure of a being’s moral status, you shouldn’t kill it. This seems like a good heuristic if expected value maximization is the right norm of rational action under normative uncertainty. But if the right norm is, instead, that it’s most rational to act in accordance with the theory in which one has the highest credence, then a more reasonable heuristic will be, “If you think a being probably doesn’t have much moral status, go ahead and kill it.”
Anyway, I hope that makes some sense. Also, I should say that I’m defending my dissertation in early December, and it’s all about action under normative uncertainty. If anyone’s interested in flipping through it, just e-mail me: firstname.lastname@utoronto.ca; it should be in send-out-able condition in a couple of weeks.
“What possible reason could there be for preferring H1 theories to H2 theories?”
Suppose:
1) You ought to prefer H2 theories
Then:
2) It is impermissible for you not to prefer H2 theories.
But according to H2 theories:
3) The permissibility of your not preferring H2 theories is affected by your normative beliefs.
So if
4) Your normative beliefs happen to be the wrong way
Then
5) It is permissible for you not to prefer H2 theories!
Conclusion: Arguments for H2 theories are self-defeating.
Hi Simon,
First, I’m not totally sure what it is to prefer a theory, or whether the ‘ought’ in premise 1 is epistemic, or pragmatic, or something else. So I may be misconstruing what’s going on here. But I think I want to deny premise 1. H1 theories are, roughly, theories of what one should do in the non-normative belief-relative sense of “should”. H2 theories are, roughly, theories of what one should do in the normative belief-relative sense of “should”. They’re theories of different things. Now, it might make sense to prefer a theory of X that is good *as a theory of X* over a theory of Y that is good *as a theory of Y*. Even that I think is contestable. But I can’t see what sense it makes to prefer theories of X as a class to theories of Y as a class. Imagine preferring theories of what killed the dinosaurs, as such, to theories of material constitution, as such.
Second, and along the same lines, the argument is valid only if the notion of permissibility is univocal throughout. I suspect that it’s not. The permissibility in premise 2, insofar as I understand it, seems to be a kind of objective permissibility, while the permissibility in premises 3, 4, and 5 is permissibility of precisely the sort that H2 theories are about — what we might call “normative belief-relative permissibility”.
-Andrew