We are pleased to provide the first of two threads discussing the recent articles in Ethics on various kinds of reasons (right, wrong, state-given, object-given). Both are introduced and critically discussed by Wlodek Rabinowicz and Toni Ronnow-Rasmussen. The first is on Mark Schroeder's "The Ubiquity of State-Given Reasons," beginning below the fold, and the second (which will assume some exposition from the first) is to follow. We invite you to join in on the discussion.
Mark Schroeder’s paper offers an ingenious and novel argument against the so-called “object-given/state-given” theory (OST). This theory claims that the distinction between the right and the wrong kind of reasons for attitudes coincides with the distinction between ‘object-given’ and ‘state-given’ reasons. The contrast between epistemic and pragmatic reasons for belief is viewed by the theory as a paradigm of this latter distinction.
But, according to Schroeder, OST is an incorrect view. It shows its weakness when we consider reasons against attitudes. Intuitively, there can be state-given right-kind reasons not to intend or not to believe. For example, it may be rational for an agent not (yet) to intend to do A if he has more urgent matters to attend to at present or if he soon expects to receive relevant information concerning A. Such reasons for not intending are intuitively state-given, since they concern the disvalue of forming an intention at a given time rather than the disvalue of the intention’s object, the action A; but, as Schroeder argues, they are nonetheless reasons of the right kind. Similar examples exist for beliefs. In particular, if further evidence concerning a proposition that p is known to be forthcoming, this can be a reason for not (yet) believing that p: a reason of the right kind that intuitively is state-given.
That the reasons mentioned are in fact of the right kind is shown, according to Schroeder, by the fact that they have the “earmarks” of such reasons. He introduces four earmarks of the right-kind/wrong-kind distinction, drawn from consideration of the paradigm case: the contrast between epistemic and pragmatic reasons for belief. The first earmark—the “asymmetry of motivation”—highlights that it is much harder to believe a proposition on the basis of pragmatic reasons than on the basis of epistemic ones. (Cf. Jonathan Way’s first ‘common mark’ of the distinction between right- and wrong-kind reasons in his “Transmission and the Wrong Kind of Reason”, in the same issue of Ethics.) The second earmark—the “asymmetric effects on rationality”—concerns the fact that while epistemic reasons make it rational to believe a proposition, as far as the distinctive rationality criteria for belief go, this is not the case with pragmatic reasons. (Even if Pascal had been right in his suggestion that belief in God maximizes expected utility, this consideration wouldn’t make that belief more rational, epistemically speaking.) As to the third earmark (which corresponds to Way’s second “common mark”), Schroeder notes that epistemic and pragmatic reasons bear differently on truth, which is the standard of correctness for belief: The former are conducive to our having true beliefs; the latter are not. (Given that the standards of correctness for other attitudes, and in particular for intentions, are less clear, Schroeder plays down this earmark in his paper.) The fourth and final earmark refers to the “recognizable ‘flavor’” of pragmatic reasons “that makes them feel intuitively like reasons for other attitudes that exhibit some of the other characteristics of pragmatic reasons for belief.” (p. 460) This is “arguably not a proper earmark in its own right, but just the report of the intuitive naturalness of classifying various putatively wrong-kind reasons together.” (ibid., fn. 3)
The examples of reasons not to intend or not to believe that Schroeder provides exhibit, according to him, both the first and the second earmark of right-kind reasons, and they lack the characteristic flavor of reasons of the wrong kind. In addition, in the case of belief, that new evidence concerning p is forthcoming is a reason not (yet) to believe p that exhibits the third earmark of right-kind reasons.
Schroeder goes further: He suggests that there can be state-given right-kind reasons for attitudes and not just such reasons against them. This holds at least for intentions (though not for beliefs). An agent’s goal to coordinate his actions with those of other agents might be a reason for him to intend to do A, despite the fact that some relevant information concerning A is still forthcoming. This reason is state-given (as it has to do with the benefits of having an intention concerning A) but it has the earmarks of a reason of the right kind: At least in certain circumstances, it can be easy to form an intention to do A in part on the basis of that reason and it may well be a rational thing to do. In addition, in Schroeder’s view, the reason in question lacks the characteristic flavor of the reasons of the wrong kind. This claim, when it comes in the paper, is not easy to understand, but the explanation is provided by Schroeder in the last section.
There, Schroeder outlines a substantive account of the distinction between right- and wrong-kind reasons. On this account, all reasons for and against attitudes consist in the benefits and costs of the attitudes in question. (This means that, in a sense, all reasons for attitudes can be said to be ultimately state-given. So-called ‘object-given’ reasons are, in this sense, only a special subclass of state-given reasons.) To the extent, however, that the benefits and costs of an attitude are typical for the kind of attitude in question, that is, to the extent that they directly relate to the distinctive nature or point of this kind of attitude, they can be said to be reasons of the right kind. For example, the point of intending seems to be to close off practical deliberation “in order to allow us to coordinate and control our own actions across time [or in order to coordinate our own actions with the actions of other agents] and make decisions at times at which we have more available cognitive resources.” (p. 483) This explains why the examples of reasons against and for intentions that Schroeder highlights in his paper are cases of reasons of the right kind. Similarly, the distinctive role of belief seems to be to close off uncertainty, so that “we have something to rely on, in reasoning.” (p. 484) Since there are benefits to relying on truth in our reasoning, this explains why evidence is a reason of the right kind. But it also explains why the fact that further evidence is forthcoming can be a right-kind reason to postpone making up one’s mind.
Moving now to our questions, there is one thing it would be good to learn more about: What exactly is the relation between the earmarks and the distinction they are supposed to mark out? A passage in the paper suggests that the earmarks define the distinction: “If these considerations bear all of the marks of right-kind reasons, they are right-kind reasons—after all, the ‘right-kind’/‘wrong kind’ distinction was just a catch-all label designed to cover an important class of differences that arise in a variety of domains.” (p. 466) At the same time, as we have seen, Schroeder presents an account of that distinction in terms of the characteristic roles or functions of attitudes. How does the latter account relate to the earmark approach? A possible suggestion might be that, conceptually speaking, the distinction boils down to the earmarks, while the account in terms of the characteristic attitudinal roles and functions is a substantive theory about the phenomena marked out by the concepts of right- and wrong-kind reasons that the earmarks define. On this interpretation, the substantive account is a ‘real definition’ of the distinction under consideration. Is this how Schroeder views the matter? It would be interesting to know the answer.
Our second question concerns reasons that exhibit a mixed earmark pattern: They possess some earmarks of right-kind reasons but at the same time have the flavor of wrong-kind reasons. To illustrate, let us first consider Schroeder’s example of an agent who is being paid for intending not to drive to LA. His reason to drive to LA would be to visit his brother, but though he thinks it very likely that his brother will be there, the remaining uncertainty makes the trip unattractive in view of the expected traffic jams. The brother is going to call the agent in the evening and resolve his uncertainty one way or the other, most probably by letting him know that he will be in LA. But the agent would get a financial reward if he were to form an intention now, in advance, not to drive to LA. The offer of a reward is clearly a reason of the wrong kind in this example. The case is strongly reminiscent of Kavka’s toxin puzzle: It is hard to form an intention not to drive to LA if one believes it is very likely that one will find this course of action to be irrational when the time comes. Let us, however, change the example a little and suppose that the agent considers it to be unlikely that his brother will be in LA. If the example is modified in this way, it loses its similarity to Kavka’s puzzle. Now it is quite easy for the agent, if there is money to be gained, to form an intention in advance not to drive to LA. It is easy, because the agent thinks it unlikely that he will later find this course of action irrational. Thus, the offer of a financial reward has one of the earmarks of right-kind reasons. At the same time, it has the typical flavor of wrong-kind reasons: The flavor of being paid for intending appears to be exactly the same in the modified example as in Schroeder’s original one. Also, it appears that the offered reward has another earmark of wrong-kind reasons: The intention not to drive is not made more rational by a promised payment, as long as it is the intention and not the action that the agent is being paid for.
So, what kind of reason do we have here: the wrong kind or the right kind? Or perhaps neither? One might try to decide this issue by going to Schroeder’s substantive account. On the latter, a financial reward for intending comes out as the wrong kind of reason: It is a benefit that doesn’t relate to any of the characteristic roles or functions of intending—to its role in closing off deliberation or its function of coordinating actions. This suggests that perhaps there is a disparity between the conceptual distinction, as fixed by the earmarks, and the substantive account. Should one then modify the latter, or rather revise the earmark approach?
The third question exclusively focuses on the substantive account. Suppose the agent is offered a financial reward if he closes off his deliberation concerning some practical issue at hand. Now, the most natural way to close off deliberation is to form an intention with respect to the matter under consideration. In this case, it seems that the benefit of intending does relate to the characteristic point of intention—to its role in closing off deliberation. But, intuitively, the reward attached to the closure of deliberation seems to be an instance of the wrong kind of reason for intending (just as in the case when the agent is rewarded for intending as such, one way or another). So, one might wonder whether Schroeder’s substantive account can be correct as it stands.
The last question concerns the relation between Schroeder’s proposal and the Wrong Kind of Reasons problem (the WKR Problem), which has been much discussed, not least by Schroeder himself, in connection with the fitting-attitudes account of value. On that account, in its well-known ‘buck-passing’ version put forward by Scanlon, an object is valuable to the extent that there are reasons of the right kind to favour it. Intuitively, such reasons are provided by the value-making properties of the object in question, even though this intuitive explication is unavailable to the fitting-attitude value analysts themselves, on pain of circularity. That one would be paid for favouring or that favouring would itself be of value might be seen as reasons for favouring, but such reasons are of the wrong kind from the point of view of the buck-passing account. The WKR Problem comes down to the question whether the relevant distinction between the right and the wrong kind of reasons can be defined in a non-circular way, without presupposing the very notion of value that is to be analyzed.
Now, what we wonder is whether Schroeder’s proposal can be of help in this enterprise. Does it give us the non-circular definition we have been after? We are not sure, but we doubt it. Schroeder’s substantive account of the distinction relies on the notion of benefits and costs of an attitude. To that extent, it presupposes some value notions as given. As for his earmark approach to the distinction, there is a comparable problem. One of the earmarks, the third one, appeals to the standard of correctness for the attitude under consideration. For beliefs, this standard is the truth of the proposition that is the object of belief, but what about favourings? A natural answer seems to be that favouring is correct to the extent that its object is valuable. But then the notion of value is being brought into the very formulation of the earmarks. To be sure, Schroeder plays down the role of the third earmark in his paper, but, if we understand him correctly, this is only because he finds it unclear what the standard of correctness for intentions might be. The same problem does not arise for favourings. It seems then that the earmark approach does not provide a non-circular solution to the WKR problem.
So here’s a question I have. The summary of Mark’s paper by our commentators says:
He suggests that there can be state-given right-kind reasons for attitudes and not just such reasons against them. This holds at least for intentions (though not for beliefs).
I’m wondering why the Schroeder account doesn’t allow for state-given reasons for belief. Here’s what I think might be an example. I’m aware that there is research showing that in-person interviews give one evidence about job candidates, but that people tend to weight it too heavily. I could imagine thinking that this made it rational to form one’s beliefs about which candidate is best before the interview stage. And the reason this is a good idea is that the settled nature of belief might make any subsequent beliefs less influenced by evidence one might otherwise overweight. It isn’t that you couldn’t change your mind in the face of really strong evidence; it is just that you’d be less likely to form a false belief on the basis of weak but inconclusive evidence that you put too much weight on.
In the way I imagine the case, it is the epistemic value of having the belief one does that gives one a reason to form it now rather than waiting to make up one’s mind. So it seems to me to be the kind of thing one might mean in saying that the right kinds of reasons must relate to the distinctive nature or point of belief. And leaving the positive theoretical proposal aside for a moment, it also intuitively seems to me to be a right kind of reason for belief.
I can imagine someone objecting that it can’t be of the right sort because it can’t be your whole reason to believe what you do — you’d also have to have evidence for it. But I’m thinking reasons don’t have to be your whole justification to be reasons, they only have to be part of the story of why, together with other applicable reasons, you are justified in doing what they favor.
I throw this out without having gone back to check whether there’s an answer in Mark’s paper, the one that is the subject of this discussion. Since this is a blog discussion I take it that’s OK. Anyway, I was wondering about this.
Many thanks to Mark Schroeder, for his paper, and to Wlodek Rabinowicz and Toni Ronnow-Rasmussen, for their précis.
Since Schroeder’s paper takes my own view as one of its targets, I have spent some time working through it. In the process, I ended up writing a paper. I’ve posted a draft of that paper here, for those who are interested (comments on it are welcome; its citations are as yet incomplete).
My main concern with Schroeder’s argument connects to the first question raised by Rabinowicz and Ronnow-Rasmussen. I, too, am concerned with Schroeder’s reliance on earmarks. I suspect it is a fatal flaw in Schroeder’s argumentative strategy.
The basic structure of Schroeder’s argument is as follows: He first collects four “earmarks” of the distinction between reasons of the right and wrong kind. He then locates cases in which those earmarks are present, but in which they distinguish among reasons that are not object-given. He concludes that any account of the right and wrong kind of reason that relies on the object-/state-given distinction is incorrect. (The actual argument is more complex. I treat it in more detail in the paper linked above.)
As Schroeder acknowledges, the argument relies on the claim that “if [the reasons that appear in his cases] bear all of the marks of right-kind reasons, they are right-kind reasons.” (p. 466) (He elaborates, “after all, the ‘right-kind’/‘wrong-kind’ distinction was just a catch-all label designed to cover an important class of differences that arise in a variety of domains.” [p. 466]) This is an instance, I take it, of what Schroeder later calls “a key methodological principle: if it quacks like a duck, it’s a duck.” (p. 480) This principle appears in other places in Schroeder’s work. I am surprised whenever I meet with it. It is not true. Even of ducks.
At the beginning of an inquiry, we may use a word as a catch-all label for a set of earmarks. At that stage, we may have no better way of identifying ducks, or kinds of reasons, than by the earmarks. (We might then assume, justifiably but defeasibly, that all that quacks is a duck.) However, by the end of the inquiry, we hope to have an account. If the account is a good one, it may allow us to discount certain apparent cases as only apparent—the account may force a reclassification of things that, we admit, bear the earmarks. (Once we have an account of what it is to be a duck, we might deny the classification to certain geese, or to robots at Disneyland, while admitting that they quack just like ducks.)
So, in a nutshell, the flaw I think I see in the argument is that, once an account of some phenomenon has been proposed, we cannot simply rely on the initial earmarks of the phenomenon, the original symptoms that guided inquiry, to claim that the account is incorrect. Accounts, if they are good, show what it is that explains the earmarks. With an account in hand, we often come to see that certain cases in which the earmarks appear are, in fact, not cases of the thing for which we have provided an account. The account, if otherwise good, can force a reclassification of such cases. So the fact that other things quack is, in itself, no criticism. That other things quack would be a criticism only if the account itself either asserts that what it is to be a duck is to quack or else somehow implies that all quacking things are ducks. To determine that, we need to look at the details of the account. We cannot depend on the initial earmarks.
In the paper linked above (and here), I re-present the details of my own account (from “The Wrong Kind of Reason,” JPhil 2005) and consider how it will handle the cases Schroeder presents. I argue that it can handle the case that may seem to cause trouble for it—that, even though the reasons in that case may quack, they are still the wrong kind of reason.
Schroeder’s other cases are interesting and challenging, and I spend some time on them as well. At the end of the paper, I reflect a bit on the very different approach to reasons taken by Schroeder and myself. I think the most interesting issues lie there.
Wow – thanks to Toni and Wlodek for getting this discussion started, to Mark, and especially to Pamela for engaging so seriously with my paper! I’ll need to digest her full paper over the next 24 hours (or much longer), but let me try to start to address some of the main questions so far. Hopefully my remarks will make it even more transparent than it already is where the holes in my paper are, so that others will continue to jump in to the discussion.
Three out of Wlodek and Toni’s four questions, as well as Pamela’s response, are related to my ‘earmark’ strategy for keeping track of where we seem to find something of the same kind that the distinction between ‘right’ and ‘wrong’ kinds of reason was supposed to track. Wlodek and Toni start by asking for clarification about the role of these earmarks, and end by suggesting that I might think they are so strongly connected to the phenomenon that they mark out that they might serve as an analysis of it. In between, they introduce a case that starts to test the neatness of how the earmarks line up. And Pamela, of course, suggests that earmarks cannot do nearly as much for me as I would like.
The role that the earmarks play is supposed to be much more like the one Pamela describes than like the one Wlodek and Toni’s remarks suggest. They don’t provide, and are not intended to provide, any sort of analysis, and I was even deliberately vague about exactly what their standards of application are, relying on the hope that the cases about which I needed judgments are clear enough, regardless of how some other cases are resolved. For example, in my ‘asymmetry of motivation’ earmark, I was careful to avoid saying exactly what sort or how strong an asymmetry in motivation is involved. So I hope that helps to begin to address Wlodek and Toni’s first question. Moreover, in keeping with the spirit of this approach, I’d also like to adopt some of Pamela’s remarks about the limits of earmarks in response to Wlodek and Toni’s second question.
That leaves me with the question of why I see my cases as ones that should push us to realize that what was most interesting about the right/wrong distinction is a broader phenomenon, rather than as cases that should be seen as casualties of fortune to be explained away, once we realize that they are not captured by our account. Pamela rightly takes me to task for the literal truth of “if it quacks like a duck, it’s a duck” – after all, some things that quack like a duck are mechanical hunting decoys. But I do think that my cases put pressure on us to be seriously asking: why isn’t the most fundamental, interesting thing that the earmarks initially led us to notice, and of which we originally wanted to give an account, present in these cases? Isn’t there something interesting and important here that all of these cases share? And isn’t it important when we want to know which reasons bear on the rationality of these attitudes?
I hope to try to say more about this later, if there is sufficient discussion, and I’ll comment on Wlodek and Toni’s third and fourth questions tomorrow. But I should close by saying that I particularly like Mark van Roojen’s example. I began with the idea that there would be right-kind state-given reasons for belief as well as against it (I rely on this idea in a paper forthcoming in Phil Studies), but I talked myself out of this after worrying that with belief, unlike with intention, it isn’t possible to ‘force the question’, because you can always act on the basis of your subjective probabilities, even if you remain ultimately agnostic in binary terms. However, Mark presents us with a case in which the question is forced not because the time to act has come, but because we have strong evidence that our future selves are not to be trusted. It’s a really great case, and I’m strongly tempted to agree with him about it.
I want to float a worry about overgeneralizing on a small set of cases. Mark raises this worry for those who identify the right/wrong distinction with the object-given/state-given distinction; I’m wondering if his argument overgeneralizes when denying the identity. Mark’s cases are about the rationality or the correctness of beliefs and intentions, and from that we are to conclude that the two distinctions above are not the same. To support that conclusion, though, I think we’d like two more things. First, we’d like cases analogous to Mark’s that address state-given reasons that are evaluative-status-making (e.g., concerning the desirable, the admirable, etc.), and other statuses we are tempted to analyze in terms of right reasons. Maybe those are to be found elsewhere, but I think Mark needs them for the argument (or would like them, just to avoid explaining why his examples only come up for certain statuses), and they strike me as much more controversial. For example, I doubt there are state-given reasons not to be amused by a joke that help to make the joke genuinely unfunny.
Another thing we’d need is some motivation for analyzing correctness or rationality in terms of reasons. I think one main attraction of fitting-attitude analyses for things like admirability is that “x is admirable” entails “someone has reason to admire x”. I doubt there are similar entailments from “x is [the] correct [way to tie a knot]” to “someone has reason to [tie a knot] x[ly]” or from “belief B is [would be?] correct [true?]” to “someone has reason to have B.” There are analyses of rationality that are not given in terms of genuine reasons (as opposed to apparent reasons, e.g. – maybe we need to distinguish senses of rationality here) that I at least find plausible, and if the correctness of belief has to do with truth it’s not clear how this will connect up to reasons.
Basically, I’m floating the idea that the wrong kind of reasons problem does not arise for the statuses Mark discusses because a reason-based analysis of those statuses is not the way to go. Maybe Mark’s cases, plus the plausibility of identifying the right/wrong distinction with the state-given/object-given distinction in a broad range of other cases, further push us in this direction? That is not to say there are not interesting distinctions to make concerning reasons to believe and reasons to intend that Mark has brought to light with his cases. Just that they might not bear on the right/wrong distinctions needed to analyze certain evaluative and normative statuses in terms of reasons.
I’m pretty sympathetic to Mark’s main claim – that there can be right-kind reasons against belief and intention which are not object-given. But I have a question about the suggestion that there can also be right-kind reasons for intention which are not object-given. Mark’s suggestion is that, e.g., the fact that you need to coordinate with your wife about who will have the car tomorrow is a reason to make up your mind about whether to go to LA, and thus a reason to intend to go to LA. But I’m not sure why the last bit follows. Why not think that the need to coordinate is a reason (of the right kind) against having neither intention, and so perhaps a reason to [intend to go to LA or intend not to go to LA] but not a reason to intend to go to LA? It seems like this suggestion would be enough to explain why this consideration makes it rational to intend to go to LA (since having neither intention is an alternative to intending to go to LA). But it wouldn’t require us to give up the idea that all right-kind reasons to intend to A are reasons to A. (A similar move could be made in response to Mark van Roojen’s example.)
Matt – I don’t think your standard of evidence for whether something should be analyzed in terms of reasons is adequate. Take ‘good torturer’. I don’t think the fact that someone is a good torturer entails in and of itself that anyone has a reason to have any positive attitude toward her, but I think even the attributive ‘good’ should be analyzed in terms of reasons. The analysis just shouldn’t entail that there needs to be anyone with those reasons – it could say, for example, that there is a reason for anyone who is in the market for a torturer to select her. That entails that anyone actually has the requisite reasons only on the assumption that there is in fact someone in the market for a torturer (which there may not be).
Jonathan – it’s true that the reason in question counts equally in favor of both intentions. And I don’t have any examples that aren’t like this, so I agree that you can’t get right-kind reasons for intention or belief willy-nilly. If we adopt contrastivism about reasons, we could say that it is a reason to intend to go rather than have no intention, but not a reason to intend to go rather than intend not to go. But I think there is a natural sense in which this reason really does count in favor of each intention, because in serving to make the state of either-intending-to-go-or-intending-not-to-go more rational, it has to also make each intention more rational.
Thanks, Mark,
I didn’t mean to suggest a standard of evidence for reasons analyses. Existential reason entailment is one big motivation for me to analyze something in terms of reasons (with supporting considerations), but there could be others. Are you suggesting entailment of conditional reason claims as an alternative? I’m not sure about the example – I don’t think that anyone in the market for a torturer has some reason to pick a good one, but maybe clarification on being in the market for something would help. That might get into your other work.
Example aside, then, I’m worried more generally that entailment of conditional reason claims is too inclusive a standard, even conditionals concerning people in the market for things. I suppose the fact that x is the only sharp knife entails that anyone who is “in the market for” a sharp knife has reason to select x (is that right?). But that does not seem to be a motivation to analyze the property of being the only sharp knife in terms of reasons. As you might guess, I have similar views on attributive uses of ‘good.’ I’m not tempted to give reason-based analyses because they entail no existential reason claims and I don’t yet see another reason for reason-based analyses here.
Assuming I’m wrong about the above, there is this residual worry I have if rationality and correctness are the only examples we use for state-given reasons. Why shouldn’t there be similar examples for the evaluative properties that are the most tempting candidates for reason-based analyses, and where the state-given/object-given distinction seems to do pretty well at dividing wrong from right reasons? Maybe you think such examples discussed by others are good?
Matt,
I don’t think entailing conditionals about reasons is grounds to analyze something in terms of reasons, but I do think that uniformity in treatment of ‘good’ and other evaluatives, as well as general grounds for thinking that what all normative properties and relations have in common is that they involve reasons, are grounds to seek to analyze attributive good in terms of reasons.
The question about whether there are state-given right-kind reasons against admiration, desire, or fear is a good one, as is the question of why not, if not. I don’t think I have a good answer to give you, offhand, but I would start, thinking within my view, by trying to think about the rational role of those attitudes. If both the costs and benefits of admiring Jackie turn essentially on features of Jackie, that is the kind of thing that would lead us to expect, on my view, that right-kind reasons both for and against admiring Jackie would have to somehow mention or relate to Jackie, and hence explain what I think you want to explain. Intention is different because some of the costs and benefits turn on our need to have some things settled (in order to act and plan around those things) while leaving other things open.
I should still weigh in on Wlodek and Toni’s third and fourth questions. Let me start with the third, which is designed to push on the sketch of a positive account of the right/wrong distinction that I give in the paper. I should start by clarifying that I don’t think that the remarks I give in the paper suffice for a full-fledged account. I don’t think they are precise enough to take a stand on important possible distinctions, so I take the remarks to be more of a gesture in the direction of the kind of account that I think must be right, rather than the articulation of a particular such account.
The main reason for this is that I did, in fact, defend a particular account of the right/wrong distinction, both in “Value and the Wrong Kind of Reason” and (in a slightly earlier and less adequate version) in chapter 7 of Slaves of the Passions. I no longer think that this particular account works, but I do think that it belongs to the family of views that I still take to be correct – views that tie right-kind reasons for and against each attitude to the distinctive nature of that kind of attitude. In the course of writing “The Ubiquity of State-Given Reasons”, I at one time thought that I had a replacement account that would do the job, but someone – I believe it was Pamela, at a conference in Austin – convinced me that it didn’t fit the example I use in the paper of deciding where to go to grad school at a time when more current matters are pressing. So I gave up, for now, on defending an exact account, in favor of pointing to which broad sort of account I think needs to be right.
(It’s worth pointing out, in passing, that although I took Pamela’s account to fall under the scope of views which might have a problem with my argument in the paper, it does share, with the family of views where I think the correct account is going to be found, the insight that the right kind of reasons for each attitude are going to turn on something about the nature of that attitude.)
In any case, my original account was explicitly designed to deal with examples like Wlodek and Toni’s. The idea of that account was that right-kind reasons are relative to an activity, and with respect to any activity, the right-kind reasons are the ones that are distinctive of that activity. And on my original idea, to be a distinctive reason of some activity is to be a reason that would necessarily be shared by anyone engaged in that activity. This rules out cases like Wlodek and Toni’s, but is too strong. In “Value and the Wrong Kind of Reason” I weakened the condition, but weakened it too much, for reasons that will be obvious to readers of Jonathan Way’s paper.
I still think that right-kind reasons are relative to an activity, and that they are the reasons that are distinctive of the activity. But I now think that looking at who shares the reasons is too coarse-grained to distinguish the right kind from the wrong. Yet the nature of the activity has to somehow tell us which reasons are of the right kind, and which are of the wrong – the right kind must somehow be more intimately connected to the distinctive nature of that activity. Or, since we are talking about attitudes, the right-kind reasons must be more intimately connected to the nature or role of each attitude. It’s clear that evidential reasons for belief are more closely connected to the role of belief than Pascalian considerations are, and I believe the same goes for stakes-related reasons against belief. I also think it’s clear that ordinary reasons for intention are more closely connected to the role of intention than Kavkaian reasons are, and I think the same thing goes for forthcoming information as a reason against intention. Finally, I think the reason that it is so clear that Wlodek and Toni’s example involves a wrong-kind reason for intention, is that it is clear that it is not sufficiently closely related to the role of intention. Now, I can’t put my finger on exactly why not, but that’s where I’m stuck, right now.
Last, let me say something about Wlodek and Toni’s last question, which is whether my talk about ‘costs’ and ‘benefits’ in the paper means that I can’t defend a Fitting Attitudes account of value concepts like ‘good’, ‘admirable’, ‘praiseworthy’, and so on. The answer is that I don’t think it does; perhaps ‘cost’ and ‘benefit’ weren’t the best words for me to use, but I don’t think that we should want to give FA-style accounts of everything that has the sort of evaluative tinge that allows us to recognize it as involving either a ‘plus’ or a ‘minus’. After all, there are both reasons for and reasons against, and we’re not going to give an FA account of reasons in terms of reasons. I know that my remarks in the final part of my paper are suggestive rather than well-spelled-out, but I would want the details to be filled in in such a way as to appeal, when strictly construed, only to such notions as reasons for and reasons against. Long live Fitting Attitudes!
Thanks, Mark,
for some really clarifying comments. I am now off to the ASSOS conference.
Mark’s cases are very interesting. As Justin has pointed out in his comment in the Way thread, we’ve long been concerned with some interesting cases of a different sort, which we think are WKRs but are controversial. We’re both puzzling over what to make of these new Schroeder-reasons: reasons to make up one’s mind (or not to), which Mark thinks bear all the earmarks of RKRs but are not object-given and, hence, do not bear on the underlying judgment of what to do or what is the case — which one might have thought was the essence of the distinction between right and wrong kinds of reason.
But we’ve got some questions about Schroeder-reasons. In particular, we think it is not so clear that his reasons against intention behave differently from obvious WKRs such as demonic incentives with respect to whether one can follow them. That is, we’re not yet persuaded of what Mark calls motivational asymmetry. That is only one of his earmarks, but it seems most important, since the claims that his reasons bear any of the other earmarks depend on some intuitions about rationality and flavor that we may not share, and Mark grants that the correctness earmark is controversial. So if Schroeder-reasons behave like WKRs with respect to motivational efficacy, that would be grounds for thinking that they are not RKRs for (or against) intention after all. (Instead, perhaps, they are WKRs against intending, or perhaps they are not reasons against any particular intentions about going or staying at all, but only reasons not to make up one’s mind now.) We will focus on the intention cases and the reasons not to make up your mind, but we think that the same point applies to the reasons to make up your mind now, and to belief as well as intention.
Here’s why they seem to behave like incentives. Schroeder-reasons seem to depend crucially on the comparative weights of the object-given reasons (with one caveat, to be discussed). To see this, let’s differentiate between close calls and no-brainers. Close calls are decisions where real deliberation needs to occur in order to figure out what you have most reason to do; no-brainers are cases where as soon as you understand the scenario, you see that you have more reason to do A than B.
Consider Mark’s second kind of case for not making up your mind: that there are more pressing matters (the alarm, etc.) that must be attended to immediately. In his example, which graduate program to attend is a close call. If it had been a no-brainer — say a decision between Old Ivy and Mediocre State — then as soon as you took in the choice situation, your mind would be made up. Which means that despite the pressing matters you need to attend to right away, you couldn’t put off making up your mind — if only because in an obvious sense there would be no (mental) act of making up your mind necessary or even possible; your mind would be made up as soon as you took in the choice situation. These cases behave just like classic WKRs, it seems: you can’t decide not to make up your mind, precisely because it’s obvious what you have most (object-given) reason to do. But if it’s not a no-brainer, then you can postpone deliberation either for Schroeder-reasons or for the reasons given by incentives.
Mark’s first class of cases, involving future information, is trickier. Here the previously mentioned caveat arises. Even when the object-given reasons to do A or B do not make for a close call, if you think that you will be getting information that may be so weighty as to swing the balance in the other direction, then you can (and rationally should) postpone making up your mind. What this case has in common with close calls is that some future mental act (either of deliberation or immediate change of mind) will be required; that makes it possible to refrain from making up your mind. So the disanalogy with WKRs on motivational asymmetry does not seem to apply to close calls or swing-the-balance cases. Whenever deliberation about what to do is needed, it seems that one can also follow WKRs to not making up one’s mind, and that it is rational to do so when the incentive is sufficiently large.
So we’re left puzzling about Mark’s fascinating and important sort of cases, but the issues raised are complex and we’d like to hear more from Mark about these points.
Dan (and Justin, I take it) seem to be offering different challenges to each of the kinds of example that I offer in the paper. This is helpful, because the second sort of example feels a little different from the first, to me, and I couldn’t make it fit with the initial theory of the distinction that I was toying with for a while.
The challenge about my cases which involve current cognitive demands is this, as I understand it: every case is either a close call or a no-brainer. In no-brainer cases, there is no motivational asymmetry, because intention can never be put off either for reasons of cognitive demands or for demon-rewards, and in close call cases, there is no motivational asymmetry, because intention can easily be put off for both. The objection is that when I compared my cognitive demand case to a demon-reward-type case in the paper, I cheated by comparing a close call case to a no-brainer case, and so I wasn’t really comparing apples to apples. This is a sharp point, and I’m not sure that I’m not guilty of having been persuaded by cases with precisely this feature.
However, I don’t quite understand what the challenge for forthcoming information cases is supposed to be. Dan says, “if you think that you will be getting information that may be so weighty as to swing the balance in the other direction, then you can (and rationally should) postpone making up your mind.” As a generalization, I think this is false. Sometimes, even when you expect to get information that may swing the balance, you should make up your mind anyway, because you have other decisions to make which require a settled answer as to the decision in question. Then Dan says something that I don’t follow about how this is analogous to ‘close call’ cases, and concludes: “So the disanalogy with WKRs on motivational asymmetry does not seem to apply to close calls or swing-the-balance cases”. The accusation seems to be that cases in which further information is forthcoming – my kind of cases – are by their very nature just like close call cases, and again I did not compare apples to apples when I compared them to my no-brainer demon-reward-type case.
But I don’t see why we should think this is right. Future information cases can _be_ “no-brainer” cases. So if we’re considering a future information case that is also a no-brainer case, we can compare the effects of demon-reward-type incentives to those of my kinds of reasons. An asymmetry of motivation that is exhibited in the very same case can’t be triaged by classification of cases. In the paper, I consider a case in which I am 80% confident that my brother will be in LA, but that 80% chance is not worth the 100% chance of traffic, and I know that he will tell me later whether he will be there. I claim that the future information is an RKR against intending now not to go into LA tomorrow. I also claim that if I were offered money to make up my mind now anyway, that would be a paradigmatic WKR. And finally, I claim that there is an asymmetry of motivation in this case, because while it is easy to wait to make up my mind on account of the future information, it is difficult to make it up on account of the reward. My evidence for this is that this case has all of the same essential features as the original toxin puzzle case – I’m making a decision that I’m highly confident that I will reverse. Moreover, all of the essential features of the case can be preserved for much higher degrees of confidence. Since I don’t see how to maintain an asymmetry of motivation in the original toxin puzzle case without allowing it in cases like this, I think it’s clear that there is asymmetry of motivation in these cases.
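(To put the decision-theoretic structure of that case schematically, with stand-in symbols rather than anything from the paper: let $V$ be the value of seeing my brother if he is there, and $C$ the certain cost of the traffic, taking staying home as the zero point. The case as described presupposes

$$0.8\,V < C < V,$$

that is, the trip would be worth taking if I were certain he would be there, but at 80% confidence $EU(\text{drive}) = 0.8\,V - C < 0$. The evening call moves the 0.8 to 1 or to 0, and with it the sign of that comparison, which is why the forthcoming information bears directly on what to intend.)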
Hi Mark, we agree that the future information cases seem importantly different from the current cognitive demand cases. The former feel much more like RKRs, though our intuitions may be driven by having thought much more about R/WKRs for evaluative attitudes. So if the future information cases really do exhibit a motivational asymmetry, I think I’m sold that they are RKRs, and I think that’s neat. (I haven’t had a chance to talk to Dan since your reply, so I will only speak for me but I expect he agrees.)
One question, though. You say “while it is easy to wait to make up my mind on account of the future information, it is difficult to make it up on account of the reward.” But that compares a Schroeder reason not to intend to an incentive to intend. What about incentives not to intend? Wouldn’t that be the apples-to-apples comparison? It is not so clear to me that you couldn’t postpone your decision for money in the case you consider above.
Thanks for the response, Mark. In case it’s not too late, here’s a short attempt to be more clear about the central case.
The central thought is that there are two possibilities: roughly, cases where (further) deliberation is warranted and cases where it isn’t. These cases correspond, even more roughly, to what we called the Close Call and No-Brainer cases previously. I don’t mean to put weight on the thought that this distinction is exhaustive, and I concede Mark’s point (if I understand him correctly) that not all cases where deliberation is warranted are cases where it is rational to engage in that deliberation — for instance, when further deliberation would be costly enough, you are rational to just plump one way or the other.
But our idea is that when deliberation is warranted — either because the object-given reasons make the decision a genuinely close call, or it just isn’t evident which way the weight of those reasons lies (even if there’s a fact of the matter), or if you know that future information might swing the balance (as in Mark’s cases) — then you can follow either Schroeder-reasons or classic, incentive WKRs to not making up your mind. Hence we don’t yet see the motivational asymmetry.
Take a case where I’m just not sure whether the preponderance of object-given reasons favors doing A or B. Maybe if I think hard about it, I’ll figure it out, but I haven’t done that work yet. But now you come to me with a proposition: $1000 not to engage in that deliberation until tomorrow. Suppose too that there’s no (obvious) cost to postponing my decision for a day. Then surely I can decide not to make up my mind yet in order to pocket the money, thereby following a WKR.