There are many ethical theories that think of right and wrong in terms of what consequences the general adoption of moral principles would have. Contractualists think of what consequences the principles would have for individual lives; rule-consequentialists think of what consequences they would have for aggregate wellbeing and other general values; Kantians think of the consequences the principles would have for the effectiveness of our subjective principles of action; and so on. It's now generally accepted that we shouldn't compare the moral codes at a universal, 100% level of acceptance. This is because we also want the theories to generate principles for self-improvement, for dealing with moral disagreements, for punishment, and so on. It is an interesting problem at what lower level of social acceptance we should compare the codes. Here, however, I am just going to assume that there is some such level n%, where n is less than 100, at which it makes sense to compare the consequences of different moral codes. It has become evident that this idea leads to further problems which the defenders of these theories haven't really tackled yet. These problems have surfaced in various discussions I've had with many people, but they have also been formulated very forcefully in the last section of Holly Smith's wonderful 2010 Utilitas paper "Measuring the Consequences of Rules". Here I want to introduce the problem and also three ways of trying to respond to it. I'm still sceptical about these answers, but I would like to hear other people's opinions about them and also about whether there might be other solutions.
So, let's assume that we are comparing different moral codes at the point where n% of people in the world have internalised them, where n is less than 100 and more than 0. We are doing this within either contractualism, rule-consequentialism (RC), or Kantian ethics. We can then ask two awkward questions whose answers will have important consequences for what the compared outcomes will be. The first question is: who are the people who haven't internalised the moral codes? It could well be that, whichever code we take, that code will have one set of consequences if one group of people has internalised it and another set of consequences if a different group has. So, who are the people we should use in the comparisons?
I believe that Tom Hurka has provided a nice solution to this question. The idea is that we can rank people in terms of how easily they naturally adopt moral codes. This ability might be based on the genetic makeup that underlies the cognitive and emotional capacities which influence how easily a person learns morality. The idea then is that we assume that the non-internalisers in the comparisons are the people in whom the codes are hardest to inculcate. I think this perfectly solves the first problem.
However, I do think that there is an even more serious problem which this does not solve. This is because whatever it is in the genetic background and other features of individuals that determines how well they learn morality does not also decide what these individuals will do and think (or what kind of practical motivations and sensibilities they will have). Instead, their behaviour and thoughts are a function of both nature and nurture. The same chosen group of people can intuitively have different ways of being non-internalisers – call these different counter-cultures. Some counter-cultures may partially overlap with the morality which the majority has internalised; others can be more like the counter-cultures of anarchists, libertarians, psychopaths, mafiosos, hippies, and so on.
Now, the problem is that whichever counter-culture we take will have very different consequences for what the outcomes of the majority's adoption of different codes will be. Different moral codes presumably interact differently with different counter-cultures, which means that, relative to different counter-cultures, different moral codes might come out on top in the contractualist, rule-consequentialist, and Kantian comparisons. The problem is that it seems very difficult to give a satisfactory and non-arbitrary answer to the question of which counter-cultures we should use in the comparisons of the moral codes. Here are a few alternatives:
1. We should compare the codes at an n% level of acceptance with respect to all possible counter-cultures. We should not limit our comparisons to merely reasonable counter-cultures, as we also need duties for dealing with the non-reasonable ones. Of course, if there were a moral code that was optimal with respect to every counter-culture, then that code would determine what is right and wrong. This, however, is unlikely. To overcome this problem, we could calculate the expected consequences of a given code with respect to all the possible counter-cultures. The optimal code would then be the one with the best expected consequences in contractualist, rule-consequentialist, or Kantian terms. We would either need to assign the same probability to all counter-cultures or consider how probable each would be given some majority moral code. My main worry about this alternative is that these theories would become even more epistemically demanding as a result. I worry that, on this view, facts about right and wrong would become unknowable.
2. We could also begin from our intuitions about right and wrong and look for reflective equilibrium. That is, we should choose the counter-culture for the comparisons so that, when we compare the codes with respect to that counter-culture, the procedure yields principles that fit what we intuitively take to be right and wrong. My worry with this solution is that it makes contractualism, rule-consequentialism, and Kantian theories even more redundant.
3. Here's my favourite solution at the moment. So far, in comparing the different moral codes, we have been assuming that the natural resources of the worlds remain the same. We are comparing the principles for the 'circumstances of justice' of the real world, as Rawls put it. So, I am thinking that we could treat the non-internalisers as a natural resource of our real world and keep them constant across the comparisons. I know this is difficult, but at the moment there is a positive morality in our society – a moral code which the majority has internalised. Then there is some percentage of people who have not internalised that code but rather perhaps only parts of it, together with different counter-cultures. In fact, we can list the different counter-cultures and determine what percentage of the non-internalisers have internalised each of them. Now, the actual level of non-internalisers might not be 100-n%. But, when we use the acceptance level n% for the comparisons, we could stipulate that the rest of the people in the compared situations have adopted counter-cultures in the same proportions as the non-internalisers in the actual world (see the sketch below). At the moment, this is the best solution that I can think of. The advantage is that the resulting principles would then be able to serve a function in our real world with respect to the kind of people we find here.
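To make the stipulation a bit more concrete, here is a minimal sketch in Python; the counter-culture labels, the counts, the population size, and the acceptance level are all invented for illustration:

```python
# A toy sketch of proposal (3); the counter-culture labels, the counts,
# the population size, and the acceptance level are all made up here.

def stipulated_noninternalisers(actual_counts: dict[str, int],
                                population: int,
                                n_percent: float) -> dict[str, float]:
    """Spread the hypothetical world's non-internalisers (the remaining
    100 - n percent of the population) across counter-cultures in the
    same proportions as the actual world's non-internalisers."""
    total_actual = sum(actual_counts.values())
    noninternalisers = population * (100 - n_percent) / 100
    return {culture: noninternalisers * count / total_actual
            for culture, count in actual_counts.items()}

# Hypothetical actual-world tallies of non-internalisers by counter-culture.
actual = {"partial overlappers": 600, "anarchists": 250, "mafiosos": 150}

# A compared world of 10,000 people at a 90% internalisation rate has
# 1,000 non-internalisers, split 60/25/15 like the actual world.
print(stipulated_noninternalisers(actual, population=10_000, n_percent=90))
# {'partial overlappers': 600.0, 'anarchists': 250.0, 'mafiosos': 150.0}
```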
In any case, I would be interested to hear what you make of these solutions and whether you think there might be better ones.
Hi Jussi,
I am not sure I fully understand your suggestion in section 3. But a problem you might run into here is that you seem to be assuming more agreement on moral issues than there actually is. Let the society in question be the US or some US state, and let the moral issue be that of abortion, same-sex relations, euthanasia, gun control, or whatever, and you will find that with regard to these issues there is a lot of disagreement, and no generally agreed upon moral code. What there might rather be are competing blocs, or large fragments, within the society, within which there is broad agreement about at least some of these issues. So it seems to me that in addition to what you call counter-cultures, you also have to take into account much larger groups that have internalized only partially overlapping moral codes. Now of course if we make the moral rules or principles in question extremely general, and we leave them open to more or less conservative interpretation, then we might end up with more wide-ranging agreement. But then we will also need to discuss what the best, or most justifiable, specifications of these broad and widely accepted moral rules are.
Hi Sven
thanks. I think I mentioned above that I am also thinking about partially overlapping cultures. Also, I am not really thinking about moral principles but rather about moral sensibilities more broadly. I agree that it will be very difficult to capture or pick out a majority code from the actual world (and, in fact, the acceptance level of this code will be very low in our world). But, yes, the idea then is that in the actual world we can describe the remaining non-internalisers, both overlapping and non-overlapping, and in what proportions they occur.
When we then compare the hypothetical principles, we stipulate that the non-internalisers in those worlds are proportionally similar to the non-internalisers in our world (even if their absolute number may vary). I'm also thinking that the overlappers will be overlapping with the compared world's code rather than with ours.
Hi Jussi,
Could you say more about why you think that your preferred solution (3) avoids your epistemic concerns about the first solution? If I understand correctly, the first solution is going to involve some very complicated (impossibly complicated?) calculation on more or less a priori data. On the other hand, it seems like the third solution would require vast amounts of empirical data – unless we make rough guesses, in which case, why couldn't the defender of the first strategy do the same thing?
(It’s also possible I’m just misunderstanding something.)
Hi Preston
that's a good question – and I am very worried about the epistemic demandingness of all these alternatives. I guess I was thinking that (3) would be at least a tiny bit less demanding than (1), for a couple of reasons. For one, it would limit the number of compared worlds somewhat relative to (1). Secondly, as you note, we could use some of our empirical knowledge in comparing these worlds. In the actual world, many different kinds of ethical principles, and sets of them, have already been adopted. By keeping the kinds of actual people who are non-adopters constant, we could use our information about how these principles work together with the actual people who have not adopted them.
I do know one way of making the epistemic demandingness smaller in both proposals. This would rely on going down something like Brad Hooker's route. On this alternative, we consider what improvements we can recognise for the principles we already have, in terms of the principles' consequences. So, under (3) we could think that right and wrong depend on what principles we could identify as having better consequences than the ones we already have, assuming n% acceptance and that the rest of the people have proportionally similar counter-cultures to those in our world. Under (1) we could ask whether we could recognise improvements relative to all counter-cultures when we take the averages. I think even here (3) would have an advantage, as it would be easier to recognise improvements.
Of course, if you are a realist about ethical facts, then this kind of unknowability will be harmless. However, if you have anti-realist inclinations (which often motivate views like contractualism), then the epistemic demands are a strike against views along these lines.
I suspect that there is no really satisfactory solution to this problem. However, I think that the best stab that we could make at such a solution would have to be sensitive to what we know empirically about how people respond to societies in which different sorts of moral codes are very prevalent. Suppose, for example, that we're thinking about a moral code that is very Victorian where sex is concerned, and that we have empirical reason to think that, in a society in which enough of a teaching effort is made to instill such a code in (say) 90% of the population, many people in the other 10% are likely to react violently against the code and be wildly promiscuous or fetishistic. For most moral codes that we're interested in, we probably can't make very confident predictions about what counter-cultures would emerge if enough of a teaching effort were made to induce a given percentage of the population to internalize the code. And, of course, there may be multiple forms that this "teaching effort" could take, complicating the problem further. That's why I don't really think there's a solution. But I think that attempted solutions that ignore these sorts of empirical considerations get off on the wrong foot. If I understand your proposals correctly, Jussi, then that applies to all three, although I'm not sure that I'm interpreting the last one correctly.
Hi Dale
I agree. I worry that this really is a killer objection. One idea I've been thinking about is to go back to 100% in comparing the codes and forget about acts of self-improvement, punishment, disagreements, and the like. In this way, we could construct an ideal theory for ideal circumstances, as Rawls called it. This might still teach us something about the nature of morality. Of course, we would need to find some other way of figuring out what's right and wrong in non-ideal circumstances, and this presumably would make all the theories discussed less important, given that in the actual world we are in the non-ideal circumstances.
What you correctly observe makes the problem even more difficult. In all the solutions I sketched out, I assumed that the counter-cultures are not sensitive to the majority moral code – they are pretty much doing their own thing. Even then, different counter-cultures might have different consequences, given the kind of interaction that happens between the counter-cultures and the majority morality. However, as you correctly point out, it may well be that the nature of the counter-cultures is sensitive to the majority codes. I do think that making this plausible assumption makes the problem even more intractable, despite the empirical evidence we have about how counter-cultures are shaped by majority moralities. If we had to compare a vast number of potential moralities in circumstances of imperfect adoption, the consequences would in this case really be unknowable.
I'm starting to think that Brad Hooker was right. I think the best we can do is to start from the actual world and our conventional morality. We can then ask whether we could have recognisably better codes for our circumstances. In trying to recognise these codes, we should take into account the number of people who have not internalised the code yet, whether that number would change if we adopted the new code, and whether the behaviour of the non-adopters would change. If we can recognise an improvement, then right and wrong would be determined by the improved code. At the moment, I cannot see anything better than this really.
I’ve been trying to think through a different kind of solution to this problem, or rather a different version of RC/RU for which the problem doesn’t arise. That is an “individual” ideal-code theory, according to which the moral code that I ought to obey is the one that it would be optimal for me to internalize, given the world as it is. Needless to say, even if this sort of view doesn’t face this problem, it faces lots of others.
Hi Dale
that’s very interesting. I’d like to hear more about this version of the theory.
My first inclination is to say that you can generate the problem even for the individual theory. You've got two options. Either you believe that all your future time-slices will internalise the compared moral codes. In this case, the moral code will not be able to generate what Ross called duties of reparation (things like the requirement to compensate for harm done). To avoid this problem, you might think that less than 100% of your future time-slices will have internalised the code, so that you can evaluate what the best policy for reacting to your own bad actions is. But then we can ask what the rest of your time-slices are doing. Whatever we stipulate seems somewhat arbitrary, and whatever we stipulate might have consequences for which principles come out best in the individualistic comparisons.
This is just to point out that this seems to be a more general problem with theories of this type.
Hi Jussi,
Doesn’t consequentialism have a standard solution to problems of this kind, namely, expected value?
For each code C, consider the expected value of the proposition: n% of people internalise C. As you say, there are different ways this could be true: different sets of people could be the ones who internalise C. So the expected value will be a weighted average of the values of these different ways it could be true. The ideal code will be the one for which this proposition has the greatest expected value.
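In code, the proposal looks roughly like this (a toy rendering: the codes, the ways the proposition could be true, and all the probabilities and values below are invented for illustration):

```python
# A toy rendering of the expected-value proposal: for each code C, the
# proposition 'n% of people internalise C' can be true in several ways
# (different sets of internalisers), each with a probability and a value.

def expected_value(ways: list[tuple[float, float]]) -> float:
    """Weighted average of the values of the ways the internalisation
    proposition could be true; the probabilities should sum to 1."""
    return sum(p * v for p, v in ways)

# Code C1: two equally likely sets of internalisers, moderate values.
c1 = [(0.5, 80.0), (0.5, 60.0)]   # EV = 70.0
# Code C2: a better best case but a much worse worst case.
c2 = [(0.5, 100.0), (0.5, 30.0)]  # EV = 65.0

codes = {"C1": c1, "C2": c2}
ideal = max(codes, key=lambda c: expected_value(codes[c]))
print(ideal)  # C1 -- the code whose proposition has the greatest EV
```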
Hi Campbell,
thanks. I may be missing something, but this really sounds like my first solution candidate (1), and I think this really is one plausible avenue which rule-consequentialists, and even contractualists and Kantians, should take (even if the latter should make relevant changes to how expected value is understood, in a non-aggregative way).
One thing we would need to decide is whether all the alternative counter-culture scenarios for a given code are equally likely or whether we should consider some of them more likely than others.
I do have a couple of worries. My first worry is that this will make all theories of this sort very, very epistemically demanding. Now, you might think that this is not a problem, as these views are criteria of wrongness rather than deliberation procedures. But I worry that this sort of complication will really make the wrongness facts unknowable, which will sever the connection between wrongness and our practice of blaming people.
I think my second worry is even more fundamental. These views are supposed to tell us something very fundamental about either what wrongness is or what makes acts wrong. The idea would now be that it is in part constitutive of right and wrong what consequences codes have even in worlds in which very bizarre counter-cultures have been adopted. That this is so far-fetched really worries me too.
Sorry, Jussi, I didn’t read your post carefully enough.
Still, your solution (1) seems not quite the same as what I suggested. Notice that, whereas my solution employs the notion of expected value, yours is expressed in terms of "expected consequences". I'm not quite sure what you mean by this.
Perhaps I can make the point this way. It is unclear to me how the expected-value solution could work for contractualist and Kantian theories. The mathematical concept of the expected value of a random variable only makes sense when the possible values of the variable are numbers. In the case of consequentialism, these numbers may be taken to represent the goodness of possible outcomes (propositions or states of affairs or whatever). What would they represent on a contractualist or Kantian approach?
Perhaps this is related to the problems for non-consequentialist theories raised by Michael Smith and Frank Jackson in ‘Absolutist moral theories and uncertainty’.
Hi again, Campbell
also, I hope you are well. I did have the expected value of the codes in mind in (1), but I was sloppy and expressed the idea badly. Sorry about this.
Here's just a rough sketch of how we could run something similar in the contractualist framework. Assume that we are comparing two codes, A and B, for reasonable rejectability. We compare these codes under an internalisation rate n which is less than 100%. We are considering the codes in the circumstances of three counter-cultures, p, q, and r.
Each pair [A,p], [A,q], [A,r], [B,p], [B,q], and [B,r] will create a standpoint for the relevant individuals of the imagined worlds. There is a degree to which these standpoints are burdensome, objectionable, or not choiceworthy. Let's assume that we can give a numeric value for such personal burdensomeness of living under a given code.
For each individual, we can then calculate the expected burdensomeness of a given code. For individual S, the expected burdensomeness of A, for example, is (the probability of counter-culture p when A has been internalised (we can assume that this is .33) times the burdensomeness of S's life in [A,p]) plus (the probability of counter-culture q when A has been internalised (.33) times the burdensomeness of S's life in [A,q]) plus (the probability of counter-culture r when A has been internalised (.33) times the burdensomeness of S's life in [A,r]).
Each individual then has an expected burdensomeness of their life under codes A and B. Assume that individuals can make objections to A and B on the basis of expected burdensomeness of these codes for them. The person whose life under A has the highest expected burdensomeness can then reasonably reject A if no one’s life under B has equally high expected burdensomeness.
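Here is a toy numerical rendering of this procedure; the burdensomeness values, and the second individual T, are placeholders I've invented:

```python
# A toy numerical version of the contractualist sketch above. The
# burdensomeness values, and the second individual T, are invented.

cultures = {"p": 1/3, "q": 1/3, "r": 1/3}   # P(culture | code), as stipulated
individuals = ["S", "T"]

# burdens[code][culture][individual]: burdensomeness of that standpoint.
burdens = {
    "A": {"p": {"S": 2, "T": 6}, "q": {"S": 4, "T": 5}, "r": {"S": 9, "T": 1}},
    "B": {"p": {"S": 3, "T": 3}, "q": {"S": 4, "T": 4}, "r": {"S": 5, "T": 5}},
}

def expected_burden(code: str, individual: str) -> float:
    """EB of a code for an individual: sum over counter-cultures of
    P(culture | code) times the burdensomeness of the individual's
    life in that [code, culture] pair."""
    return sum(prob * burdens[code][culture][individual]
               for culture, prob in cultures.items())

# Rejection rule: the person with the highest EB under a code can
# reasonably reject it if no one's EB under the rival code is as high.
for code in burdens:
    worst = max(expected_burden(code, i) for i in individuals)
    print(code, round(worst, 2))
# A 5.0
# B 4.0  -> A, not B, can be reasonably rejected here
```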
Modelling the Kantian approach in the same way might be more artificial, but I take it this can be done in terms of the expected effectiveness of your maxims in different situations, when this is understood roughly in the same way as burdensomeness is in contractualism.
Thanks, Jussi. That’s interesting.
This is a bit tangential to your original point, but I wonder what you think of the following objection to the view you just sketched.
Suppose there are only two individuals, I and J, and two possible outcomes (or counter-cultures), X and Y, which are equally likely. And suppose the burdensomeness of each code for each individual in each outcome is as shown in the following table:

        Code A        Code B
        X     Y       X     Y
I       5     5       9     0
J       5     5       0     9
For both individuals, the expected burdensomeness of A (5) is greater than the expected burdensomeness of B (4.5). So both individuals can reasonably reject A. However, one might think that A is the better code, because, although there will be greater total burdensomeness under A, it will be more equally distributed.
Perhaps you should say that the optimal code is the one that minimises the expected maximum burdensomeness (EMB), rather than the maximum expected burdensomeness (MEB). In the above example, the EMB of A (5) is less than the EMB of B (9), but the MEB of A (5) is greater than the MEB of B (4.5). But I’m not sure how minimising EMB could be connected to the story about reasonable rejectability, which seems to fit better with minimising MEB.
Hi Campbell
thanks for this. This is indeed getting very interesting. So, first, let me note that I am a little hesitant about having moral intuitions about mere numbers without knowing what the numbers stand for (this is why I am sceptical about Ross's objection to consequentialism too). Thus, if we described what makes up these numbers, we might agree that code B is better.
If I've understood EMB correctly, I've got one worry about it. I've also got two ways in which MEB might be defended against this kind of objection.
My worry about EMB concerns cases in which, instead of two outcomes/counter-cultures, we have thousands. Now imagine that there is a code which is very good for an individual in all of these thousands of scenarios except one, in which a fairly bizarre (and, by stipulation, unlikely) counter-culture has been internalised. In this situation, according to EMB (I hope I am getting this right), the agent could reasonably reject this code despite the fact that it is very, very good for her in all except one of the scenarios. This seems unintuitive to me. The code that minimises the expected maximum burdensomeness could therefore be fairly burdensome to all, if for every other code there was just one unlikely scenario that was even more burdensome. This is just a hunch, though.
Let me also note two ways in which MEB could be defended. First, you could build some notion of equality into the burdensomeness of individuals' lives. Thus, it would be part of the concrete personal standpoint – how the life goes for an agent – how she is related to other people. Finding your life unequal to others would thus make your life more burdensome. Only after this would we assign numbers to the burdensomeness of lives. This might make it the case that, once we have the numbers, MEB actually yields fairly intuitive codes, even if the numbers might look bad.
The second option would be to go prioritarian. This would be to think that, when we calculate the expected burdensomeness of codes with respect to a range of outcomes, we multiply the burdensomeness of the most burdensome standpoints in these worlds by some factor. This too might help with the equality worries.
Here’s how to calculate the EMB of a code C. For each possible counter-culture X, find the maximum burdensomeness experienced by any individual under C given X. Then multiply this by the probability that X would emerge given that C is adopted. Then add together all the results.
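In code, the calculation looks like this (a quick sketch using the numbers from the table above, with X and Y equally likely as assumed there):

```python
# A quick check of the figures above: X and Y equally likely, burdens
# taken from the table (A: everyone 5; B: I gets 9 in X, J gets 9 in Y).

probs = {"X": 0.5, "Y": 0.5}
burdens = {
    "A": {"X": {"I": 5, "J": 5}, "Y": {"I": 5, "J": 5}},
    "B": {"X": {"I": 9, "J": 0}, "Y": {"I": 0, "J": 9}},
}

def emb(code: str) -> float:
    """Expected maximum burdensomeness: for each outcome, take the
    worst standpoint, weight by the outcome's probability, and sum."""
    return sum(p * max(burdens[code][o].values()) for o, p in probs.items())

def meb(code: str) -> float:
    """Maximum expected burdensomeness: compute each individual's
    expected burden across outcomes, then take the worst."""
    people = burdens[code]["X"].keys()
    return max(sum(p * burdens[code][o][person] for o, p in probs.items())
               for person in people)

for code in ("A", "B"):
    print(code, "EMB =", emb(code), "MEB =", meb(code))
# A EMB = 5.0 MEB = 5.0
# B EMB = 9.0 MEB = 4.5
```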
I don’t think what you say about EMB in your worry is quite right. As I said before, it is hard to connect EMB with reasonable rejectability. The fact that one code has a greater EMB than another does not obviously provide any particular individual with grounds for rejecting the former code. In my example above, Code B has a greater EMB than Code A, but it is unclear how either individual could appeal to this fact in order to reject B. Indeed, we might expect that neither individual would even want to reject B, because both have a greater EB in A. This may be a case where the code which is preferred by everyone is not the best code.
I think these issues are related to so-called ‘ex ante’ and ‘ex post’ prioritarianism, as discussed e.g. by David McCarthy in ‘Utilitarianism and Prioritarianism II’ (a really nice article).
Hi Campbell
thanks for the clarification. I hadn't got EMB right. Can I ask one more question about this proposal? When you say at the end 'then add together all the results', is it that we add different individuals' burdens under different counter-cultures (whoever's is the most serious one under the code + counter-culture pair), times their likelihoods?
If this is the case (and this, I think, is what made me read the view wrongly), then we are making interpersonal aggregations, which are ruled out in contractualism. This explains, as you say, why there isn't a connection to reasonable rejection.
Thanks for the suggestion about the McCarthy paper.
I'm afraid I don't quite understand the proposed solution, but in any case the general proposal to rate our moral principles as a function of how they would work in some ideal or semi-ideal world (Kant, Scanlon, etc.) always struck me as rationally unmotivated, since we don't live in such a world (in this I agree with the remark Jussi ascribed to Hooker, although I found Hooker's presentation of his view less clear than this simple summary, at least in _Ideal Code_). Such a test may be a /necessary/ condition of our principles being moral, but it is certainly not /sufficient/, since the good and bad effects of additional persons conforming to certain principles are sometimes non-linear. Some principles may work fine in an ideal or semi-ideal world, but awfully in others.
What we need are principles that work wherever we happen to be, including ones for the current world, of course – but we must also be prepared to act differently if the number of people following certain principles changes, which in turn changes the effects of our acting on our current principles, just as we should be prepared to make similar changes under other circumstances. All versions of ideal-world tests seem simply to be proposals that we /exclude/ from consideration any principles which would specify – in their antecedent description of the circumstances under which we should perform certain consequent actions, or value certain things, etc. – the class of descriptions including features like "and when the number of persons following similar principles is M…" I have never understood why anyone would want to rule out of bounds such principles in a fundamental moral test.
So if you start with an ideal world test, yes, you’ll need some more or less complex way to then try to rule out principles which lead to this result, but this is the long way around. I think we should instead abandon the ideal world condition, and go straight to moral supervenience, requiring that our moral principles be acceptable wherever their antecedent conditions hold, with no restrictions on what kinds of antecedent conditions are to be considered. This makes the test of whether we want *any* number of persons following our principles be the guide, which then forces us to adopt (in many cases) principles which prescribe different actions under different circumstances, including those in which different numbers of people follow just those principles.