We are very pleased to have Jonathan Quong commenting on Simon May’s “Why Strict Compliance?” A reminder that you can find Simon’s paper open access here. Here now is Jonathan:
I learned a lot from Simon’s excellent paper, and I hope it will become a focal point for discussion of ideal theory. I’m very sympathetic to the main idea, even if I’m not entirely sure how all the details work.
Simon aims to answer the strict compliance puzzle: “why should moral norms for responding to the widespread injustice of the actual world depend on an account of a hypothetical world in which every agent complies with the requirements of justice?”. His answer is complicated, but it goes roughly as follows. A just society is a grand scheme of social cooperation for reciprocal advantage amongst free and equal persons. An essential feature of genuine schemes of cooperation is that the participants are mutually accountable to one another. Mutual accountability involves the standing to demand that others comply with the rules, and failures to comply trigger a suspension of the default duties of normal cooperation. And default rules of normal cooperation must assume strict compliance—they are the rules that obtain when there have been no violations that would trigger a suspension of the default rules. Thus, genuine schemes of social cooperation begin with a strict compliance assumption.
1.
I want to raise two questions about Simon’s account. First, how much of this is stipulative and how much is substantive? Simon says that genuinely cooperative schemes have various features (e.g. they are fair, there is mutual accountability). These features are crucial in getting to the strict compliance assumption. But is this a purely stipulative or technical notion of a genuinely cooperative scheme? If so, this would undermine the importance of the article. The article would amount to a demonstration that, given a certain stipulative definition, such schemes depend on a strict compliance assumption.
So I assume Simon intends his picture to be a non-stipulative analysis or account of the concept of genuinely cooperative schemes. But I’m a bit skeptical that genuinely cooperative schemes must have all the features Simon identifies. Football teams, for example, seem like they can be genuinely cooperative schemes amongst the players and staff. But they don’t have to be fair, nor need they involve the kind of mutual accountability Simon describes. The aim of at least some of these teams (though clearly not Arsenal) is to win matches. It may be that the best route to building a winning team is to let some players get away with things (e.g. the superstar player may be most effective if he is not sanctioned when he fails to do his share of the unglamorous defensive work, or fails to conform to the off-field rules of conduct). I think teams like this can be genuinely cooperative schemes in a colloquial sense, but they aren’t necessarily structured around fairness, and they lack a consistent version of mutual accountability in Simon’s sense. But if the features that generate the strict compliance assumption aren’t part of the fabric of genuinely cooperative schemes—if they’re only part of some sub-set of such schemes—then we need moral or normative arguments for their inclusion. Simon’s paper doesn’t offer such arguments, but maybe he would agree that this is part of the larger project.
2.
Second, I have a question about how to understand the strict compliance assumption. As Simon emphasizes later in the paper, the strict compliance assumption is consistent with non-utopian theorizing. He says we might set up cooperative schemes (which assume strict compliance) to deal with past injustice (he provides an example of democratic transition away from dictatorship). The principles governing such a transitional scheme won’t be utopian—they don’t represent an ideal objective.
But if we can have fair cooperative schemes set up to respond to past non-compliance, why can’t we have fair cooperative schemes set up to handle ongoing non-compliance? Suppose we notice that some people in the neighborhood are leaving trash in the local park, but we lack the resources to identify the culprits. This creates a burden, and we decide to set up a new cooperative scheme where the burdens of clean up are distributed fairly. The new clean up scheme takes for granted the existence of a cost, and is designed to fairly allocate this cost. But the cost arises from regular acts of non-compliance. We also know (given the isolated location of our neighborhood) that the people creating the cost are part of our neighborhood. Can we have a fair cooperative scheme to handle this fixed cost, even though it presupposes non-compliance?
There are at least two responses available:
Option 1: The scheme is a genuinely fair system of cooperation and consistent with the strict compliance assumption. The non-compliance costs that the scheme seeks to fairly allocate are external to the scheme, and thus taking them as given is not a violation of the strict compliance assumption.
Option 2: The scheme is only part of non-ideal theory—how we respond to non-compliance. But the clean-up scheme is a downstream component of the much larger cooperative political project, and that project has a set of default rules where perfect compliance is assumed.
Option 1 is unappealing. It holds that the strict compliance assumption is satisfied even when we assume lots of ongoing wrongdoing so long as that wrongdoing is deemed “external” to the scheme. This seems inconsistent with the spirit of the strict compliance assumption.
Option 2 is, I think, the more promising response. But it raises the question of where we draw the line. Why do we say the park cleanup is a downstream response to a failure to comply with a more general scheme, but we don’t say the same of the transitional government rules?
Another problem is that Option 2 does not seem to help when we consider cooperative schemes designed to take into account people’s propensity to engage in non-compliance. Perhaps, ideally, it would be best if everyone participated in a cooperative scheme with rules R1. But we know that the demandingness of those rules, in conjunction with human frailty, means that a scheme structured around R1 will have lots of non-compliance, the consequences of which will be very bad. We can instead opt for a different cooperative scheme structured around R2, which is less demanding and will, as a result, attract near-perfect compliance. The overall consequences will be much better under R2, so we create our cooperative scheme with R2 as the rules. In one sense, R2 appears consistent with the strict compliance assumption, since it has a set of default rules that assume strict compliance. But in another sense, we only opt for R2 because we correctly anticipate a lot of non-compliance if we tried to implement a very different, and more demanding, scheme.
I’m not clear whether Simon believes this justification for the R2 scheme is consistent with the strict compliance assumption. If he thinks that it is consistent, I worry that Simon and some strict compliance skeptics are talking past each other. Simon is insisting that genuinely cooperative schemes begin with a set of default rules with which we assume everyone complies. But I suspect some skeptics of the strict compliance assumption aren’t denying this. What they’re insisting upon is that which schemes we decide to set up in the first place must be guided by our non-ideal or realistic assessments of the population’s likely levels of non-compliance with different possible schemes.
Thanks to Jon for his excellent comments and to Dave for providing the forum for discussion.
Before responding to Jon’s comments, let me say something about how I think about the paper in general. Essentially, my goal for the paper is that it shakes up our standard ideas about what ideal theory is supposed to be about. My belief is that there is a way of thinking about the strict compliance assumption (SCA) that has very little to do, in the first instance, with the destination-setting agenda of (what I call) the telic approach. Instead, there is a more distinctively Rawlsian way to conceptualize the SCA, one that draws on substantive Rawlsian ideas and values, primarily about cooperation, and which therefore does not need to be shared across a variety of quite different political philosophies or theories of justice. That is, if you buy into the basic Rawlsian premise that a just society should be modeled as a fair scheme of cooperation, with all that this entails, then you have a route to the SCA that other approaches need not have.
Three immediate caveats:
(1) I do not wish to claim that the argument I advance is what Rawls really had in mind, or is the best way to interpret the first two sections of A Theory of Justice. Rawls’s own statement of ideal theory and the SCA is definitely suggestive, at the very least, of the telic approach. My argument here, however, if it works, is perhaps something he could have said about ideal theory.
(2) I do not wish to claim that non-Rawlsian theories could not employ something like the strategy I set out. I have tried to think in the past couple of years about just how exclusive to justice as fairness the approach is, but I do not have very many well-grounded conclusions on the matter.
(3) Most important of all, I think the paper is best read as a provisional sketch of a longer argument; a longer argument that others may do a better job of constructing than I would. There are a great many points where far more could be said about the argument than what I say. If the overall trajectory of the view strikes people as plausible, then I hope that they might be persuaded to help develop the broader architecture.
Jon asks how much of my argument is substantive and how much stipulative. The more stipulative it is, the less interesting the conclusion. But if it is not stipulative, then it is not at all clear that genuine cooperative schemes have the features I claim. In particular, it is possible for, e.g., a football team to be genuinely cooperative without being organised according to fair terms or with any commitment to mutual accountability.
Let us just cut to the chase and call this hypothetical football team “Arsenal” because (i) some of us have still not forgotten 1989 and (ii) there is nothing better than to kick a team when it is at the very bottom of the Premier League table, beneath Norwich (a small town in East Anglia).
There is a clear sense in which Arsenal can be a perfectly good example of a more or (more likely) less functional cooperative scheme, whatever the content of its internal rules. If we adopt, say, Michael Bratman’s account of cooperation, then each player can intend to play their part in the overall plan, and intend that their intentions mesh with those of the other players, etc. This is so even if part of their modus operandi is to unfairly privilege one or other of their prima donna players, and even if they lack the type of mutual accountability relations I set out in the paper.
So there is normative stipulation in my argument: “genuinely” cooperative schemes are (a) fair and (b) characterized by mutual accountability. But note that for the first normative stipulation, at least, I’m simply following Rawls: “the idea of cooperation includes the fair terms of cooperation” (2001, p. 6). I mean the second normative stipulation to be in the same vein.
Once we have these two assumptions, (a) and (b), we still need extensive argumentation about what follows from them. If Rawls is right that the basic structure constitutes a fair scheme of cooperation only when it is governed by the difference principle (or something close to that), then I do not see it as a problem that he starts out by incorporating fairness as a normative element in his conception of cooperation. Similarly, if there is a cogent argument that mutual accountability in fair cooperation requires a strict compliance assumption in the justification of the governing rules, then I do not see it as a problem if, for the sake of that argument, we stipulate at the outset that genuine cooperation, normatively speaking, includes mutual accountability. The argument still has to be made, especially if the link between mutual accountability and strict compliance is not obvious.
Moreover, if, at some point, independent support can be given for the idea that cooperative schemes should incorporate mutual accountability, then the stipulation can fall away. I find the idea intuitive myself, and it is part of what helps me distinguish the kind of cooperative scheme Rawls has in mind from mere centrally coordinated activity. I’d be interested if others found the idea similarly intuitive. But, whatever the force of that intuition in itself, I think other ideas and values can be marshaled in support of mutual accountability. Some of these can be drawn from, e.g., rights theory, and the Feinberg-style intuition that having the standing to demand the performance of a duty one is owed is an important part of what it is to have dignity, to be able to look others in the eye, and so on. (As I mention in a footnote in the paper, there are some relatively technical differences between the Feinberg view and the argument I develop, but these I set aside for another paper.)
Let us return to poor godforsaken Arsenal. Yes, there is a perfectly good descriptive sense in which they are a thoroughly cooperative endeavour (just not a very good one). But from a normative perspective, if I were one of the less celebrated players, there would be a very clear message that I should shut up and know my place. I would think of myself as more of a cog in a machine, than as a member of a team. I would hope that the manager has the wherewithal to enforce discipline against the more troublesome players, so that we may avoid the ignominy of certain relegation, but I would not consider myself as part of that exercise in collective governance, except as I may be instructed to play some role by the manager. I would not be a citizen of the Arsenal eleven, but an employee of the firm, with a pay cheque rather than a share in sovereign power. And perhaps that is the best way to run a football club, but there is a kind of cost here: an instrumentalisation of the relations between people and a sense of moral alienation from each other. That cost may be worth it for the sake of a highly competitive sports league, or a lucrative business enterprise, or an efficient military force, etc., but it would be a dismal prospect if society as a whole were, at bottom (where, again, Arsenal is) merely functional in that way.
Jon asks how exactly we are to understand the SCA if, as I argue, it is not essentially utopian. He gives us a really neat example of an isolated neighborhood — I’ll call it Blikkiesfontein — where the neighbours must organize a fair clean-up scheme because some of their number keep violating rules about not littering in the neighbourhood park. He presents two options about how to understand the clean-up scheme:
Option 1: The scheme is consistent with the SCA since the non-compliance it responds to (i.e., the littering) is external to the scheme.
Option 2: The scheme is an instance of non-ideal theory, but part of a broader cooperative project, which has ground rules justified in accordance with a SCA.
Jon says that Option 1 is unappealing, since it means that the SCA is consistent with a great deal of wrongdoing, as long as this wrongdoing is considered external to the scheme. Option 2 is more promising, he thinks, but it raises some questions for my view.
I think I will bite the bullet on this example, and adopt Option 1. It is, I admit, a strange ideal theory since it is predicated on the assumption that some of the participants are failing to comply with the terms of another scheme that comprises the very same people as participants. This is an odd situation, but I think it is odd because the one-to-one correspondence between the schemes’ participants is relatively rare.
To take a few steps back, let us forget for the moment about how much wrongdoing or disaster or suffering there may happen to be in the world. Suppose that some of us wish to cooperate with each other, in the normatively loaded sense discussed above, to achieve some good end. That is to say, we wish to set up some rules that structure our interactions, as adapted to our present circumstances, in a way that is conducive to our desired end, and we wish to regard ourselves as morally accountable to each other for conformity to these rules. The argument of my paper is that this requires us to first specify a strict compliance rule, i.e., a rule that defines what each of us should be doing on the assumption that everyone does what they should. The justification of that rule therefore constitutes an ideal theory — in the sense that it is a strict compliance theory for a particular set of circumstances — *whatever* the broader state of the world may be and however imperfect, rotten, horrible, nasty, or brutish its condition.
In particular, it does not matter if the desired end of our scheme of cooperation is to eliminate the very circumstances that lead us to believe that our scheme of cooperation is needed in the first place. This would make it a non-utopian ideal theory. Our ultimate goal is not to bring about a state of affairs where everyone perfectly complies with the rules of the scheme. Our ultimate goal, instead, is to have *no one* comply with the rules of the scheme, and that because circumstances are such that the scheme is no longer necessary or appropriate. Perhaps there are more efficient systems of rules that we could employ to reach this end, rules that don’t assume strict compliance, but if we do wish to be morally accountable to each other in our fair scheme of cooperation, then there must be some specification of what we all should be doing on the assumption that all do as they should.
Suppose (1) we are a group of strangers who have been transported to Forbidden Island by some hostile power. We all want nothing more than to get back to our own Paradise Islands, where we will each go about our separate lives and have nothing more to do with each other. Leaving Forbidden Island requires the construction of a vessel. Fortunately, this is possible with the resources at hand, but it will take some years and require us to cooperate with each other, which we elect to do in a fair and mutually accountable way. We must therefore develop a strict compliance rule defining our common expectations of each other, and then supplement that strict compliance rule with various non-ideal rules for dealing with pilfering or malingering, etc. Hopefully it will work. If the scheme starts to collapse, we have various choices: we might resign ourselves to life on Forbidden Island, and make do as best we can, or we might start thinking about a more dictatorial, Hobbesian solution to our predicament. Ideal theory, understood as strict compliance theory for a fair scheme of mutually accountable cooperation, is not the only way to resolve a collective problem.
Or suppose (2) we are a cooperative group dedicated to bringing awareness of human rights abuses in another country to our compatriots. Or to counteracting racist stereotypes in public discourse. Or to removing other people’s litter from a communal park. In these cases, something decidedly imperfect is once again taken as a premise of our scheme of cooperation. Perhaps we might be optimistic enough to think that our cooperating with each other might one day lead to the elimination of the relevant imperfection, but it is more likely that we regard our desired end as something like a constant struggle or pushing back against the enduring tide of human rights abuses, racist stereotypes, or litter. Again, I would think of that as a non-utopian ideal theory. We should certainly hope that the scheme is one day no longer necessary, even if there is nothing in the way that we design the rules that can be expected to lead to that state of affairs.
Returning to the example of Blikkiesfontein, the problem is that the people leaving the trash in the park are the very same people who must now clean it up. It is a little odd (or perhaps more than a little odd) to regard this as an external circumstance when it is a problem created by the people who are supposed to be cooperating with each other in the clean up. The litterbugs are, as it were, inside the group! But does that matter? Would the terms of the clean up scheme be any different if the trash were left each week by visitors to the neighbourhood? If not, then the problem the clean up scheme addresses may as well be purely external. The basic terms of that scheme, in itself, with which we assume people strictly comply, refer to each neighbour’s scheduled allotment of labour, provision of gear, rules for disposal, and so on. And the participants can strictly comply with those rules, even as the litter still appears each week.
There’s a broader scheme of cooperation, of course: something like living in Blikkiesfontein as a good neighbour. If this broader scheme is also a fair, mutually accountable scheme, then it will have its own strict compliance rule, one component of which will presumably be to not litter in the park. Partial compliance rules with respect to littering will set out procedures for identifying litterbugs and imposing penalties on them. Given the inefficiency of these procedures, however, the residents also need to establish the clean up operation. So the clean up scheme is non-ideal *relative to the broader scheme of good neighbourliness in Blikkiesfontein*. But *relative to the neighbours considered as participants in the scheme*, the basic terms of their cooperation will be predicated on a strict compliance assumption. So it is non-ideal from an external perspective, but ideal from an internal perspective, so to speak.
One scheme of social cooperation can be nested inside another. And one scheme of cooperation can be nested inside another as a remedy for non-compliance with the rules of the larger scheme. The immense complexity of the relationship between the various schemes of cooperation in our day-to-day life is beyond easy description. Whether we describe a moral theory for a particular scheme as ideal or non-ideal is not itself something worth arguing too much about, I don’t think. What matters is whether it should ever make sense to require a strict compliance assumption at any stage in the moral justification of the rules of a scheme of cooperation. My answer is that it does, but not because we regard the scheme in question, governed by those rules, as our ultimate destination, one which we should hope to endure for all time. Instead, the strict compliance assumption makes sense insofar as we are committed to constructing the scheme in question, wherever it may be located in the broader network of human interaction, or in more or less pleasant circumstances, as a fair system of cooperation between mutually accountable participants.
Hi Simon
Could you say something about the status of directed duties in your argument? At present, it seems that they don’t play any necessary role. The central concept is that of standing to demand compliance and to sanction non-compliance, sanctions being suspensions or limitations of our default duties of cooperation. Footnote 5 distinguishes this idea of standing to demand performance / sanction non-performance from the idea of being owed a duty, which is just one possible basis of standing to demand / sanction (right?). It seems to me, however, that in a cooperative scheme (between equals who consider themselves accountable to each other), each of us owes compliance *to* others who comply with the scheme, not to those who don’t. Would you agree with that? If we didn’t owe these duties to each other – if, say, we just owed them to God, and God had delegated enforcement to us – then it wouldn’t be a cooperative scheme, would it? I see how you can get the priority of full compliance without invoking directed duties, but it seems to me that the case for a full compliance assumption doesn’t rest on the idea of fair cooperation unless we think of these duties as being owed to other cooperators.
Hi Andrew — I think you are right on both points, and that there is no tension here.
I distinguish between (a) the relation between the bearer of a duty and the person (or persons) to whom that duty is owed, and (b) the relation between the bearer of a duty and the person (or persons) who have the standing to demand the performance of the duty.
The directed duty in (a) is neither necessary nor (more controversially) conceptually sufficient, for the standing in (b). So we can have a scheme of social interaction where the participants all have standing to demand of each other that they each perform their prescribed duties, but where none of the duties are actually owed to any of the people in that scheme. (I think Feinberg’s Nowheresville is actually consistent with this, despite some unfortunate language in the article that suggests its denizens are not in the business of making demands. More on this in my currently unfinished paper on “Standing and Direction.”)
My argument is mainly predicated on the logic of moral demands — and so primarily the regulative rather than distributive role of the principles of justice, in the terminology of Section IV. Since having the standing to make a demand of another person necessarily involves a premise about one’s own duties, I think we are led through the chain of reasoning in Section III.1 of the paper to the specification of each person’s default duties of normal cooperation in accordance with a strict compliance rule. So far, nothing about directed duties enters the picture.
But an appeal to fairness is essential too, otherwise the strict compliance rule would simply be an artificial implication of a system of moral demands. So I appeal in particular to each participant’s normative interest in good standing to complete the argument. Here is where directed duties would naturally enter the picture. We can very plausibly say that we owe it to each other, when we cooperate with each other, to ensure that each person’s normative interest in good standing is appropriately protected (i.e., that there is a way for them to cooperate without breaking the rules). Otherwise the person in question would be wronged. This is what I think we should say, and it is one way in which genuinely *fair* cooperation does involve directed duties.
But I don’t think that that makes the concept of a directed duty essential to my argument in the paper, much as it is part of my overall view of fair cooperation. Suppose Suzy is a sceptic about the very idea of a directed duty, i.e., the idea that one person could wrong another person by violating a duty. Suzy might still accept my account of the standing to press moral demands, and might still accept the normative interest in good standing as a constraint on justifiable terms of cooperation. I suppose that might be coherent, or at least I am not going to assume it isn’t. Even so, Suzy should still accept my argument for the strict compliance assumption, assuming it is cogent.
You also suggest, more specifically, that the participants owe it to each other to comply with the rules of the scheme, where this is more, I take it, than simply what they owe to each other in virtue of the fairness of the scheme’s rules. Again, I agree with that, although I don’t at present have a well-worked out view about what justifies that conclusion. So I think you are right that a full theory of fair cooperation will include this directed duty. But I don’t think the argument for the strict compliance assumption rests on all the elements of this full theory. So I think what passes for fair cooperation in a world without directed duties would still require a strict compliance assumption in the specification of its basic terms.
When you sever strict compliance ideal theory from utopian theory, you start seeing strict compliance rules everywhere. And then you run the risk of not seeing anything but strict compliance rules, which threatens to make the assumption an empty formalism. If so, I would be talking past sceptics of the SCA, as we separately understand it. Jon raises this possibility with his discussion of the scheme structured around two different sets of rules, R1 and R2. So I want to say something to mark a clear contrast with cases where the SCA is not appropriate, and to explain how the justification of rules for such cases differs from genuinely cooperative schemes (again, in the normatively loaded sense discussed above).
Suppose you and I are charged with specifying speed limits for our city’s road network. We all travel on the roads, and we all have a very strong interest in making sure they are as safe and efficient as possible, and this in a fair way. The first point is that I do not think we should regard the collective activity of citizens using the road network as a genuinely cooperative scheme. It should be fair, in some sense, of course, so that motorcyclists are not exposed to great danger for the benefit of minibus drivers. But it would be a mistake to think of drivers, as such, as having the standing to demand of each other that they comply with the rules of the road. That’s a recipe for disaster. When you drive, you should switch off your Strawsonian reactive attitudes and regard the other vehicles on the road primarily as vaguely intelligent, vaguely purposive metallic threats to life and limb, not as vehicles driven by one’s cooperating fellow motorists. The collective activity here is one that is best centrally coordinated, rather than structured as a cooperative scheme. The appropriate accountability relations are vertical rather than horizontal.
It is no accident that the question of strict compliance now falls away as redundant. For any proposed speed limit on a stretch of road, we can estimate some or other measure of compliance, with some or other distribution of speeds. This will give us an estimate of the number of accidents, and their severity, and the costs that this would impose on the efficiency of the road network as a whole, the costs of police enforcement, etc. For instance, a speed limit of 25 mph on a main arterial road might lead to widespread non-compliance, and a very broad range of speeds: some people observe the limit, some try to keep reasonably close to it, but many just ignore it as a dumb rule. This would be very dangerous, so it would be a mistake to go for 25 mph as the speed limit. At the other extreme, we could have an 85 mph speed limit that pretty much everyone can be expected to comply with. But that would itself be very dangerous, because people would simply be driving at very dangerous speeds, even if the range of speeds were narrower than would be the case at 25 mph. Somewhere in the middle, say a 40 mph limit, we may have a fair bit of non-compliance, but almost everyone might be expected to drive between, say, 35 and 45 mph. That might be the optimal outcome: we want people driving between 35 and 45 mph, so we set the speed limit at 40 mph. If we set it at 45 mph, then they’d drive between 39 and 49 mph, which would be a bit less efficient. But we also don’t really want everyone driving at no more than 40 mph. That wouldn’t be very bad, perhaps, but it would be less efficient than the 35-45 mph range we’ve identified as optimal. People could, after all, get to work a little quicker than that.
The point here is that with centrally coordinated activity, there is no reason to care about compliance except insofar as expected levels of compliance, combined with feasible levels of enforcement, affect the outcome. Adopting a set of rules with which we can expect greater compliance may lead to a better outcome, but it may also lead to a worse outcome. It’s just another variable to be varied.
I take it that this is more or less the lesson that consequentialist sceptics about the SCA want us to learn. And it’s a great lesson! — one that I fully endorse when it comes to the design of rules for collective endeavours that should be structured as centrally coordinated activities. These are all around us, combining in complicated ways with genuinely cooperative schemes characterized by relations of mutual accountability. It’s often hard to keep them distinct: is a rule requiring shoppers to wear a face mask during a pandemic merely a vertical rule of coordination, or is it also one of the ephemeral terms of cooperation between the shoppers? (Or perhaps no rule at all, if you happen to live in Florida.)
In any event, I take it that the upshot of this is that consequentialist critics of the SCA do not merely wish us to take human frailties, the demandingness of rules, etc., into account when designing a set of rules. Even the proponents of the SCA should do that, for the most part at least. The difference is that the consequentialist critics think we should take these things into account because they affect the overall outcome of the activity. The proponent of the SCA, in contrast, thinks we should take them into account because they affect whether it is possible to satisfy each person’s interest in being a member in good standing of a cooperative scheme where the participants are mutually accountable to each other. It’s a (defeasible) constraint on the justifiability of the scheme’s rules, and not just a variable.
Simon’s replies very helpfully clarify his view. I just want to raise a couple of other points.
First, regarding the littering example, Simon opts for Option 1. He concedes that it may be odd to do so when the costs the scheme is designed to cooperatively address are being wrongfully created by some of the participants in the scheme, but in the end he suggests this doesn’t matter because the terms of the clean-up scheme wouldn’t be any different if the costs were imposed by outsiders. But even if the rules or terms would be the same, there may still be an important difference in the way the participants can or should relate to one another. It seems plausible (to me) that they can’t relate to each other as free and equal participants in a fair scheme. They can’t do so because they are aware that some members of the group are wrongfully creating this burden. They aren’t sharing fairly in the burdens and benefits of the park clean-up, since although all may be doing their fair share of the clean-up activity, only some are doing their share of refraining from littering. It’s hard to avoid the thought that not littering is bound up with doing one’s share of keeping the park clean. If one of the illicit litterbugs is dropping as much litter as he’s cleaning up each week, there’s a clear sense in which he’s not really doing his fair share of the clean-up, even if he follows the rules and does clean the park when the rota says it’s his turn. I don’t see how all the participants can confidently and sincerely look each other in the eye and say “we’re all in this together, ready to do our part”. They could say this if the costs were created by people outside the scheme, or at least by people who are no longer non-compliers.
Second—in response to my case where we choose the less demanding R2 rules in anticipation that the R1 rules will provoke a lot of non-compliance—Simon suggests the scope of his view is more limited than maybe I understood. He draws a distinction between genuinely cooperative schemes, where the SCA holds true, and centrally coordinated collective activities, where (I take it) the SCA need not hold.
I’m not sure I understand the distinction between genuinely cooperative schemes and centrally coordinated collective activities. The main difference that Simon emphasizes is that in the former the relations of accountability are horizontal, whereas in the latter these relations are vertical. But when is it appropriate for a scheme to be regulated by horizontal as opposed to vertical relations of accountability? Simon says that using the roads is not a genuinely cooperative scheme, and “it would be a mistake to think of drivers, as such, as having the standing to demand of each other that they comply with the rules of the road”. Not only do I find this very counterintuitive (my impression is that drivers are constantly holding each other accountable for such failures!), I also don’t see what really distinguishes this from the quiet car. We need rules to regulate the behavior of people in trains and, as in the case of roads, there are different considerations pulling in various directions (some people want to be social, others want to work quietly, others want a place to buy food and drink). One way to try to strike a balance is to have the rule about the quiet car—at least one car on the train is reserved for people who want a quiet space. This rule is not democratically enacted by the passengers; it is centrally imposed by the train company, which is trying to balance the different interests of all the passengers, as well as their own interests. Similarly, the rules of the road aim to strike a balance between competing interests, and they are also imposed by a central authority rather than from the ground up (if anything, citizens are likely to have a greater say in the rules of the road than the rules on the train, at least in some countries). And in both cases it seems like those designing the rules probably ought to take predicted levels of non-compliance into account when deciding which scheme to implement (e.g. a rule that prohibited people from ever speaking or even typing on keyboards in the quiet coach would likely be ignored so systematically that it would quickly undermine the whole project of having a quiet coach).
So one issue is that I don’t yet have a clear grip on the distinction Simon is invoking. But the other issue is this: At the end of his comment Simon agrees that even on his view, we must take predictions about likely non-compliance into account when designing a set of rules (though on his view the reasons for doing so differ from a more purely consequentialist argument for doing so). But if we are taking estimates of non-compliance into account when deciding whether to implement R1 or R2, then it seems like there’s a sense in which we aren’t fully adopting a strict compliance assumption. To fully adopt a strict compliance assumption when building one’s theory is to ask what fairness or justice ideally requires of everyone, without any worries about whether everyone will comply. At least I am guessing that’s how some philosophers think about the strict compliance assumption.
Jon says we should understand “Do not litter yourself” as one of the rules of the Blikkiesfontein clean-up scheme. Thus, when litter appears, the participants in the scheme will realize that at least some of the others are not merely bad neighbours, but are also bad members of the clean-up crew. We might also supplement this thought with non-ideal rules, within the clean-up crew, for sanctioning participants who have been caught littering — the other members can demand they do an extra shift, say. All this in addition to whatever non-ideal rules Blikkiesfontein as a whole might adopt to punish litterbugs wherever they might be coming from (e.g., a fine). These latter sanctions would not be imposed on individuals insofar as they violated a rule of the clean-up crew, but only insofar as they violated a neighbourhood ordinance.
I’m very sympathetic to the idea that when the group has to go out on a Saturday morning, yet again, the members will no doubt have a sense of disillusionment with each other, in the way that Jon suggests. The thought in my mind, however, is that all of their default duties of normal cooperation with each other, as members of the crew, remain in place. They don’t know who the litterbug is, so there’s no identifiable person whom they become ceteris paribus permitted to sanction, I take it.* So the problem of the litter may as well be exogenous, for all the difference it makes to how they are duty-bound to behave with regard to each other.
*I suppose there are different ways of organising a system of sanctions. Perhaps a randomly selected individual could be sanctioned pour encourager les autres. I would have to think more about what this meant for each participant’s standing to demand of the others that they comply with the rules of the scheme. It would hardly be a model of fair cooperation though.
Jon asks (i) when it is appropriate for the accountability relations in a social scheme to be vertical rather than horizontal. Adding to that question, (ii) how might we know whether a particular scheme’s accountability relations are one or the other?
I don’t have a general theory in answer to (i). I do believe that there is something non-instrumentally valuable about horizontal relations of accountability, such that we should prefer to organize our cooperation on this basis, other things being equal. But other things are not always equal, and it often makes sense, for different reasons, to weaken or even eliminate the horizontal accountability relations entirely. With respect to (ii), we often don’t know, and we frequently disagree about the contours of accountability relations in our day-to-day life. If I violate a duty, and people around me start calling me out for that, I might push back by disputing whether I really did have the duty in the first place, or whether my actions really did violate it, but I am just as likely, if not more likely, to push back by challenging their standing to demand compliance. In so doing, I’m implicitly invoking some or other normative conception of the terms of cooperation between us. This might be an entirely disingenuous move — half-baked and hypocritical — but it is often an entirely justifiable reaction to intrusively moralistic busy-bodies. It doesn’t matter if Bruno is right that Anne’s meddling in Guy’s romantic life is wrong; it’s none of his business.
I claim that we should regard the collective practice of driving in our respective vehicles on the road network as a centrally coordinated activity, rather than as a genuinely cooperative scheme. Jon finds this counterintuitive and notes that drivers frequently attempt to hold each other accountable for their violations of traffic rules. I agree about the latter claim, but regard it as a sad fact about how we often fail to discipline our conception of our moral relations wisely. There’s a structural similarity with moralism here — X mistakenly thinks he has standing to demand of Y that she phi — although the explanation of the mistake would I think be different in each case.
So what then is the difference between Traffic and Quiet Car? In a nutshell, the difference is that the participants in Traffic are encased in large, potentially lethal metallic objects and often attempt to communicate with one another by means of propelling these objects in unusual directions at unexpected speeds. This just makes things worse. The Strawsonian driver who wishes to express his indignation at the tailgating BMW regards one or other of his default duties of normal driving as suspended — and so we get brake checks, and bottlenecks, and road rage, and accidents, and people dying, all because someone nursed a reactive attitude instead of murdering it in its cradle.
Thus, if one were to design the moral rules for a scheme of social interaction from scratch, a very important question would be whether to link people’s duties together with the tripwires I describe in the paper. In Quiet Car, I think this makes sense. Giving people the standing to demand of each other that they stay quiet does not inexorably lead to a breakdown of the system. There’s a danger of course, and perhaps that gives us a reason to adopt a somewhat limited network of horizontal accountability. But the fellow passengers in Quiet Car can probably do a reasonably good job of enforcing the rule without chaos ensuing. It’s ok to take it upon oneself, as a fellow passenger, to police people like Anne by politely insisting they end their phone calls. So by all means, be a Strawsonian passenger! If that doesn’t work, then we may reach a point where it is best to just back away from the confrontation and leave it to the train staff. At this point, we adopt a kind of alienated perspective on the troublemaker — we’re no longer attempting to restore proper cooperation with them, but rather just observing them as a now-external disruption of the cooperation between the remaining passengers. So perhaps be only a *moderately* Strawsonian passenger.
My substantive view about Traffic is that this alienated perspective is one we should adopt from the get-go. We shouldn’t think of ourselves as policing traffic rules *at all*, even if it is very tempting to punish the speeding BMW by getting it stuck behind the truck in the right-hand lane. Resist that temptation! This is not a normal scheme of cooperation, where default duties should be understood as linked together by tripwires. That would be unnecessarily dangerous. So don’t be a Strawsonian driver at all!
Jon notes that in both Traffic and Quiet Car, the relevant rules originate in an act of imposition by a central authority. That much may be true, but it doesn’t imply that the relations of accountability for conformity with the rules are the same in both cases.
The imposition of a rule on a group of people involves the exercise of a normative power (or a bundle of normative powers for the strict Hohfeldian). We might have a dictatorial imposition, in which case the normative power is held by a single agent. Or we might have a democratic system of rule-making, in which case there is a complicated structure of normative power encompassing all of the enfranchised participants.
I understand accountability in cooperative schemes in terms of the standing to press demands, however, and I unpack that standing as a normative liability rather than as a normative power. Pressing a demand can sometimes involve the exercise of a normative power, but I do not think this is essential to it. Rather, I think that when I appropriately demand that Suzy phi, I am communicating to Suzy, in part, that if she does not phi, then I will become ceteris paribus permitted to do something adverse with respect to her. It is not that my demand (necessarily) changes her normative condition. Rather, her action would change my normative condition with regard to her. A demand is more like an ultimatum than an order.
So how a duty initially comes into existence is one question. And how the violation of that duty might affect those held by others is another question.
A distributed system of horizontal accountability is a natural complement to a democratic system of rule creation. And a narrow system of vertical accountability is a natural complement to a dictatorial system of rule creation. But the connection here is a contingent, substantive one.
I do think, of course, that a full specification of the terms of a genuine scheme of social cooperation will include a democratic system of rule making. But I don’t see that as part of the argument for the SCA.
Jon writes that:
“To fully adopt a strict compliance assumption when building one’s theory is to ask what fairness or justice ideally requires of everyone, without any worries about whether everyone will comply. At least I am guessing that’s how some philosophers think about the strict compliance assumption.”
I agree that some philosophers think we should ask what fairness or justice ideally requires of everyone, without worries about whether everyone will comply. This is not how I understand the role of the SCA, though. Nor do I think it is how the SCA works in justice as fairness — although I imagine there would be disagreement about that.
On the last point, how should we understand the strains of commitment argument in Section 29 of Theory? Rawls is clear that because the parties must select a conception of justice on the basis of the SCA, they have to consider the strains of commitment (e.g., p. 126). Thus: “they cannot enter into agreements that may have consequences that they cannot accept” (p. 153). But if we set aside worries about whether everyone will comply with a conception of justice, and its realized system of institutional rules, when determining what justice requires of us, then why make the choice between conceptions depend, even in part, on consideration of the strains of commitment? That would be a purely downstream problem about how likely it would be for us to realize what justice, independently specified, requires of us.
Since this point about the strains of commitment argument is a matter of Rawls interpretation, I accept that there are a number of other possible interpretations. I don’t mean to argue against them, but this is a constraint on the theory of justice that I find intuitive and plausible, whether or not I have properly understood what Rawls had in mind.
Setting Rawls aside, there is now of course a substantial literature on feasibility in a theory of justice, with a number of people rejecting the idea that feasibility constrains principles of justice. I don’t engage with this literature in the paper at all, since I came to the conclusion that my argument from horizontal accountability to the SCA should work whatever position one adopts on feasibility. I may be mistaken about that (in which case the paper should have been much longer than it already is). I am certainly happy to appeal to feasibility considerations in explaining why we should have one scheme of cooperation, with one system of accountability, rather than another; that is, in my account of what a set of strict compliance rules looks like. If the question is whether principles of justice, in particular, should be similarly constrained, I am happy to suppose that they should be, but I don’t at present have much of an argument that they must be.