Currently,
epistemologists seem to be very interested in practical reasons and practical
rationality. One good example of this is an interesting new paper entitled
‘Knowledge and Action’ by John Hawthorne and Jason Stanley (here) forthcoming
in the Journal of Philosophy.
Usually, when I read epistemologists writing about reasons, I feel as though I’m entering
a strange new world where things look slightly peculiar.
In this
paper, Hawthorne and Stanley argue that there are interesting relations between
knowledge (as opposed to mere beliefs), rationality and reasons. They claim
that the theories of rational action they are familiar with leave this connection
unexplained. This claim is difficult to assess. The only theory of rational
action they mention is that rational action is maximizing expected utility
(well, they do talk about a proposal by Fantl and McGrath about the connection
between knowledge and rationality). If that is the only theory they know of,
then their claim might well be true.
But, of
course, there is a huge, rich recent literature about rational action,
practical rationality, and reasons that others are familiar with – the work of
Williams, Smith, Wallace, Dancy, Scanlon, Parfit, Broome, Korsgaard, Velleman
and others. Hawthorne and Stanley bring up none of this literature and the views
about beliefs, reasons and rationality developed in this tradition. Couldn’t any of this work shed any light on how to account for the
connection between knowledge and rationality? Have all these people really done
so badly that their work is not even worth mentioning in this context?
If
Hawthorne and Stanley really believe this, then I would like to see some
arguments and investigation. Otherwise, the bold claim above just reveals
ignorance on their part. Sorry – this is just a personal methodological rant.
Still, I also have some more substantial questions (even if I probably
haven’t read the paper carefully enough yet).
Hawthorne
and Stanley begin from what they think is *the* view about rational action: an action is rational if it maximizes expected
utility. The latter is a function of one’s subjective degrees of belief in the
consequences of the actions and their utilities. On this view, knowledge of
the relevant consequences is neither sufficient (one may know that p without a high
enough degree of belief to be warranted in acting on the belief that p) nor
necessary (a high degree of belief can rationalise an action without amounting to
knowledge) for rationality.
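For what it’s worth, the expected-utility view they start from is standardly put along these lines (my notation, not H&S’s): an action is rational just in case no alternative to it has higher expected utility, where

```latex
% Expected utility of an action $a$, with possible outcomes $o_1,\dots,o_m$,
% credences (subjective degrees of belief) $Cr$, and a utility function $U$:
\[
  EU(a) \;=\; \sum_{i=1}^{m} Cr(o_i \mid a)\, U(o_i)
\]
% On the view H&S target: $a$ is rational iff $EU(a) \ge EU(a')$
% for every available alternative $a'$.
```

Knowledge of an outcome fixes neither the credence nor the utility terms here, which is exactly why knowledge drops out of the picture on this view.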
Hawthorne
and Stanley resist this picture by referring to folk-appraisals of actions – by
what we would intuitively say to criticise people if they act without knowledge
of the relevant facts. So, they talk about Hannah and Sarah who go down a wrong
street because Hannah believes that the restaurant they are looking for is
there. Hawthorne and Stanley say that it would be natural for Sarah to say to
Hannah that ‘you shouldn’t have gone there since you didn’t know the restaurant
was there’. Likewise, we blame a doctor for negligence if she merely
justifiably believes that the needle is safe without knowing it. They give a
great many examples of this sort.
The first
problem is that these appraisals do not seem to be appraisals of rationality per se. Instead, what is
assessed is what the agents ought to have done, what they had reason to do, or
whether they are to be blamed for negligence. Someone holding the maximizing
expected utility view about rationality could well accept all these appraisals
without giving up the idea that the described agents acted rationally. All this
would show is that oughts, reasons, and blame come apart from rationality
(which so many people think is true anyway). One can rationally do what one
ought not to do, what one has reason not to do, and what one is blameworthy
for. Furthermore, Hawthorne and Stanley do not give a view about rationality to
fix the problem but rather a view of reasons. Given that the connection between
reasons and rationality is controversial, to say the least, we do not get much
further on the rationality side of things in the end. H&S actually
acknowledge this towards the end as they seem to fall back on the expected
utility view about rationality.
Their own
positive view, which is supposed to fit the appraisals better, is offered in a few
principles. The first of them is:
1) The
Action-Knowledge Principle: Treat the proposition that p as a reason for acting
only if you know that p.
This
principle is supposed to capture what is implicit in our assessments, i.e.,
that if one acts on a belief that does not amount to knowledge, one violates a
norm and is therefore criticisable. I would have liked to hear much more
about which norm one is violating and what precisely one is criticisable
for.
In any
case, the claim here is that knowing that p is necessary but not sufficient for
treating a proposition as a reason. It’s not sufficient because some knowledge is
irrelevant to action. To get around this, they first define
p-dependency: a choice between x1, …, xn is p-dependent iff the most preferable
of x1, …, xn conditional on the proposition that p is not the same as the most
preferable of x1, …, xn conditional on the proposition that not-p. With the
help of this definition we get a principle which gives both sufficient and
necessary conditions:
2) The
Reason-Principle: Where one’s choice is p-dependent, it is appropriate to treat
the proposition that p as a reason for acting iff you know that p.
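If I understand the p-dependency definition right, it can be stated compactly as follows (my notation, not theirs; I’m reading ‘most preferable conditional on p’ as having maximal expected utility conditional on p):

```latex
% A choice among options $x_1,\dots,x_n$ is $p$-dependent iff the best option
% given $p$ differs from the best option given $\neg p$:
\[
  \arg\max_{i}\, EU(x_i \mid p) \;\neq\; \arg\max_{i}\, EU(x_i \mid \neg p)
\]
```

So p-dependency is just the condition that p makes a difference to which option one should pick.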
If I get
this view right, there are propositions that are good reasons. They are of the ‘that
p’ sort. These propositions can become the reasons for which we act only if
they are known. This implies two things. First, if knowledge is a ‘factual’
term, then only true propositions can be reasons. And, second, one cannot act
for a good reason that p unless one knows that p (the proposition that p can be
a proposition that it is likely that q). Thus, being known ‘converts’ certain
propositions, i.e., reasons, into ‘personal’ reasons.
I have two
worries about this account. The first is the all-too-familiar Dancyan worry
about a category mistake – that the view puts reasons into the wrong
metaphysical class. I don’t know what H&S think propositions are, but they
must be some sort of abstract, linguistic entities – sets of possible worlds,
functions from worlds to truth-values, structured thoughts, and so on. But,
these are not considerations that count in favour of actions like reasons do.
It’s the snake, its poison, and other such worldly things out there that give me
reasons to be scared and run away, rather than the abstract objects that stand in
some semantic relation to the snake. Why would the latter give me reason to run? So,
any view that requires treating propositions as reasons seems to require
treating as reasons things that are not reasons at all. And the fact that these
propositions are known doesn’t make them any
more into reasons. Of course, knowing that p implies that one should regard p, the
thing in the world, as a reason in the case that p really is a reason since
knowing that p guarantees that p is the case.
I also
have a more substantial worry. In the place where I live, the fire-alarm goes off every
now and then. In each case, I know that the fire-alarm goes off, I believe that
the house is on fire, and I run out. So far, it’s been either a false alarm or
a test. My robust intuition is still that I run out for a good reason. My worry
is that the H&S account either gives me no reason at all to run out or it
gives the wrong reason.
The best
potential reason to run out would be the fact or the state of affairs that the
house is on fire. Dancy’s account gets into problems here. He wants to say that
this fact, even when it is a ‘non-existent’ fact as believed by me, is the good
reason for me to run out. But, most people would want to say that the good
reason I run out for is my justified belief
that the house is on fire. The H&S account does not allow me to have this
reason (or the believed proposition that the house is on fire as a reason). My
belief is after all false, so it cannot count as knowledge. Thus, the necessary
condition for my reason is not satisfied.
Of course,
I know that the alarm has gone off. But, that the alarm has gone off seems like
a wrong reason to run out of the house. I could for instance set the alarm off
myself, and yet I wouldn’t have any reason to run out. The alarm itself doesn’t
count in favour of anything; it is just supposed to indicate the real reason.
What other options would H&S have? Well, they could say that, given the
alarm, I know the proposition that
the house could be on fire. However, if this proposition was a good reason,
then I would have a reason to run out now because I do believe even now that
this house could be on fire. Maybe they could say that, as I believe that the
house is on fire in the alarm case, I also believe that the house is probably
on fire. That’s true, but does that belief count as knowledge? It could well be
false given that there’s never been a fire when the alarm has gone off. So,
it’s hard to tell what reason they would give for me to run out.
All in all,
I’m not convinced yet. And I wish they had engaged a bit with the great
literature on reasons and practical rationality that we have, and with the more plausible views about reasons and rationality therein.
Can’t they get around the Dancyan worry by saying that reason ascriptions ascribe propositions which, if known, alert us to the fact that there are genuine reasons, without having to say that the reason itself is _that p_? I mean, if the thought is that when someone is in need of help my reason for action is that person’s need and not the proposition that this person is in need, there will still be questions about when such reasons ought to figure in practical deliberation, and the K account (or a minor variant of it) is an answer to that question.
Fwiw, I’m not overly enamored with that account because I can’t see how every single condition necessary for knowledge could have potential significance for evaluating the use of a premise in practical reasoning. I can see how a premise’s fit with evidence could be relevant. I can see how a premise’s truth could be relevant. I have a harder time seeing how the non-accidental connection between belief and truth _must_ matter to evaluating the use of a premise in practical reasoning.
Clayton,
I’m not sure I follow the first part. I’m not sure what reason-ascriptions could ascribe except genuine reasons. And, if the genuine reasons are not propositions, then propositions should not be ascribed as reasons. But, I’m sure the K-account could be revised enough to make the reasons come out right. Maybe some version of a Williamsian existence-condition for a reason based on knowledge could do the trick. But, I don’t think the results would be plausible. I don’t think the existence of reasons could depend on our knowing them – and if the reasons exist independently, then we should take into account whatever reasons there are. Of course, knowing them helps us to do so.
But, I’m with you on the second one. The more I think about this, the more it seems that there are many norms in terms of which we assess actions – rationality, acting on reasons, doing what one ought to do, acting in blameworthy ways, and so on. Maybe some of these norms, on some occasions, require whatever must be added to justified true belief to get knowledge. This would already deal with the cases. But, I see no evidence that there is only one major norm for assessing actions that always requires knowledge.
Jussi,
I’m glad we’re on board wrt the second point. That’s really the one I’m interested in. As for the first point, here’s what I had in mind.
Suppose Tarzan is in need of help. Suppose Tarzan’s needs are what give Jane reason to act. It’s true that Tarzan’s needs aren’t propositions, but presumably you’ll get something _very_ close to H&S’s view if you say that Tarzan’s needs should figure in Jane’s deliberation only if Jane knows that Tarzan has such needs. True, you might say, the proposition that he has such needs isn’t the reason for helping, but if the issue is really what epistemic relation must we stand in to reasons if they’re to properly serve as our reasons, there’s a proposal in the neighborhood of the H&S proposal that should suit them fine.
Hi, Jussi. I want the Brewers to win the World Series. I fear that I will someday fall from the seventh floor of my parking structure. And I wonder whether this is evidence of my declining sanity. But I don’t want the proposition that the Brewers win the World Series – what would I do with it? I don’t fear the proposition that I will someday fall from the seventh floor of my parking structure – what could it do to me? And I don’t wonder the proposition that this is evidence of my declining sanity – that doesn’t even make any sense.
Yet wanting, fearing, and wondering are all naturally treated as relations to propositions. Shouldn’t this make you suspicious of the kind of evidence Dancy marshals that reasons are not propositions, either?
Clayton,
I’m sure you are right that there is a proposal along the lines you suggest. One area where formulating that proposal might have bigger consequences is the uncertainty cases. The probabilities of propositions will be different from the probabilities of the states of affairs.
Mark,
Maybe I’m wrong about this, but I would have thought that the question of what the favouring considerations for actions and propositional attitudes are is distinct from the question of how to understand propositional attitudes themselves. Even if propositional attitudes were relations to propositions, it could still be the case that they are normatively required by things in the world. I think Dancy is on stronger ground about reasons than about attitudes. At least the arguments seem better there. I do agree with you that there are reasons to be suspicious about his account of attitudes.
Of course, as you know, he does give an account of propositional attitudes along the same lines. That is, he uses just the kind of examples you give to argue that propositional attitudes like wanting, fearing, and believing *are* attitudes towards things in the world – and that their intentional objects are worldly stuff. He doesn’t deny that propositions feature as the content of fears and beliefs; it’s only that the content is not what is believed to be the case or feared. I know this sounds slightly peculiar, but there is a point to it. I think he wants to understand an attitude as a three-place relation between the agent, the proposition, and the object in the world.
Hi Jussi,
on an alternative view about your reasons in the fire alarm example, what the fire alarm gives you reason for is to believe that there is a fire. This together with some normative belief of yours may then give you reason to conclude that you have a practical reason to leave the building. When you later find out that there was in fact no fire, you may think that, as it turned out, you really had no good reason to leave the building. You merely thought, justifiably or rationally, that you did.
This seems to me an equally plausible way of thinking of the reasons that the fire alarm’s going off might give you. People like Williams and Parfit would, I think, take that view of your example. I am beginning to suspect though that, for the people who study these questions closely and who have done so for some time, our intuitions about which kinds of reasons different facts give us may have changed from what they were before we started studying these questions and what people write about them.
Because of that, and because H and S, as you call them, appeal to what they claim to be commonsensical judgements, which may seem unintuitive to some moral philosophers, it’d be nice to do some experimental-philosophy-style surveys to see what the folk’s intuitions about these concepts actually are. My guess is that, while philosophers sometimes make very sharp distinctions between what we have reason to do, what we can be blamed for, and what is rational or justified, most people may not draw any such distinctions when they think about what to do, or whether to blame somebody, or to criticize them, and so on. If it turns out, as I suspect, that the folk don’t have these fine-grained distinctions in mind when they do these things, then we may not be able to rely too heavily on what we intuitively believe to be folk-appraisals of actions, in the way that you describe H&S as doing, when we argue about these matters.
To sum up: there seems to me to be different things that we can intuitively say about examples like Jussi’s, and this may be because, like the folk, even philosophers don’t make any sharp distinctions between what’s rational, what we have reason or ought to do, when we don’t philosophize. (Since I have not read the comments on Jussi’s post, I apologize if this is very unrelated to what others are saying about it.)
Jussi,
About the Action-Knowledge Principle, you write:
“I would have liked to hear much more about which norm one is violating and what precisely one is criticisable for.”
The norm one is violating is the Action-Knowledge Principle. One is criticizable for violating the Action-Knowledge Principle.
You write:
“Instead, what is assessed is what the agents ought to have done, what they had reason to do, or whether they are to be blamed for negligence.”
Our view is not meant to supplant (a version of objective) decision theory, but rather to augment it (see the response to Objection 1). Decision theory is a theory of when someone does the right action. Our theory is about the reasons one has for doing the actions one does. They are theories of different things. The reason decision theory is incomplete is that it doesn’t give a complete account of oughts, reasons, and responsibility. You obviously don’t disagree with us.
As for the “all-too-familiar Dancy worry”; For the reasons Mark said, I have great difficulty finding this compelling.
As to your final worry, you write:
“In the place I live, the fire-alarm goes off every now and then. In each case, I know that the fire-alarm goes off, I believe that the house is on fire, and I run out. So far, it’s been either a false alarm or a test. My robust intuition is still that I run out for a good reason. My worry is that the H&S account either gives me no reason at all to run out or it gives the wrong reason.”
The good reason you run out the door is your knowledge that the fire alarm is going off. Someone who acted solely on the belief that the house was on fire, in those circumstances, may be acting correctly, but for the wrong reason (this is akin to our case of the surgeon who performs a risky but necessary surgery using as a reason that the patient will survive; the action is the right thing to do, but the reason given is wrong).
I think that takes care of all of your criticisms.
Jason,
what is the evidence that in the folk-assessments of actions you refer to what is criticised is the violation of the Action-knowledge principle rather than other norms with which we evaluate actions? Why is it that the doctor is criticised for lack of knowledge rather than for causing a risk to the patient? Why is it that Sarah is criticised for lack of knowledge rather than for going to the wrong place? I find it much more plausible that the criticisms you give are in fact the latter kind of criticisms, rather than being based on the knowledge-action principle.
Decision theory is a theory of when one does the right actions? I find that difficult to believe – I thought decision theory was a theory about rational action. I thought the right action would be the one supported by sufficient reasons. One can sometimes satisfy the requirements of rational decision theory yet fail to act on sufficient reasons. To say that your theory is a theory about the reasons one has sounds like a bit of an overstatement. All it seems to say about reasons is that they have to be propositions that are known and that have an effect on the desirability of choices. That doesn’t quite specify reasons yet. I don’t think Mark gave much reason to think that reasons are propositions, as you will have seen from my reply. I find the idea that they are very uncompelling.
I also cannot see how the knowledge that the fire alarm has gone off could be the right reason to run out. As such, the alarm is just a lot of noise. True – the annoyance of the noise is a good reason to go out, but that really isn’t the reason. If it were, I would have a good reason to run out even when I know that they are testing the fire-alarm. Besides, the same move can be used to avoid criticism in the cases you begin the paper with. Sarah can give her knowledge of the evidence she had of the restaurant’s location as her good reason, the doctor can give her knowledge of the evidence of the safety of the needle as the good reason, and so on.
I think that takes care of all your replies.
Sven,
I would think that the folk in the fire case would accept that you had a reason to go out, and that only philosophers would deny it. We can make sense of why someone runs out. I agree that irrationality criticisms seem to be part of the philosophical tool kit rather than of everyday life – whereas reasons and oughts seem more ingrained in our way of life.
Jussi, I agree that the folk may think that the fire alarm gives you a reason to run. But, I think that the folk would sometimes think of reasons, or find it intuitive to think of them, in the other way. Suppose, for example, that somebody finds out that she is not in fact, contrary to what she has been believing, allergic to nuts. She may then think that all along she really had no reason to avoid nuts, as she has been doing. Or, if somebody said ‘too bad she’s been avoiding nuts, since she actually had no reason to’, then I don’t think that the folk would protest and say that this claim does not make any sense. Also, if somebody said that you really had no good reason to run, since there was no fire, then the folk might very well think that that claim makes sense, which is likely to be why some philosophers, like Williams and Parfit, find this view intuitive.
If this is true, as I believe, then the folk use ‘reasons’ both in a sort of objectivist way and in some belief-relative way. Of course, to settle what the folk concept of a reason is, we would need to do some empirical studies.
Sven,
I’m getting worried about you. All this talk about ‘folk’ (which sounds a bit derogatory) and empirical studies. I hope your studies in Michigan do you good 🙂 But, I do agree that *we* have two kinds of intuitions about reasons. Some reasons seem to be undermined, or to vanish, as a result of false information. Remember the classic case from Williams about the reason for drinking gin from the glass that turns out to contain petrol. I think Williams is right that in that case one has no reason at all to drink.
But, in some cases, even when the belief is false, there do seem to be real reasons – considerations (maybe of a non-actual kind, or mental states) that really count in favour of the act. Contra H&S, I don’t think acting with false information is always criticisable. I would like accounts that can capture both of these phenomena. I wouldn’t like to do this with two senses of ‘reasons’ but rather by finding the right reasons. I don’t think the fire-alarm *itself* is the right one, but that might be just me.
Jussi, thanks for these responses.
I agree with you that it seems wrong to say that, whenever somebody acts on false beliefs, we and others would all criticize her. If my imagined person who’s not allergic to nuts avoids them because she thinks they’d make her ill, then most people, I suspect, would not think she’s being an idiot. They would not want to criticize her actions, I think, and nor would I. We may instead think of her as acting wisely.
It might, I think, be that the belief-relative way and the objectivist way of thinking of what gives us reasons are not two different senses of ‘reason’ but instead, as you may be suggesting, two different substantive views of what really counts in favor of acting in certain ways.
What, lastly, is a good example of a consideration that we believe, but which is false, and which is nevertheless a good reason to act in some way?
I think that you get that kind of case when the stakes are high. It’s better to be safe than sorry, and that’s why we think there are considerations that support actions we wouldn’t perform if we knew the facts. If I get a lot of evidence of an immediate disaster, I do have reason to act in ways that would prevent the disaster, even if it turns out that the good evidence I had was misleading.
I have to add that the more I think about the H&S view of reasons, the less plausible it seems. Here’s another case I’ve been thinking about. Take a racist government which believes, but does not know, that its discriminating policy will cause suffering to the minority.
Now, you have two ways of formulating the supplemented decision theory with regard to utility:
a) you might think that utility is satisfying desires. In this case, the government is rational in carrying out the policy (or is even acting rightly) [a bit odd], lacks reasons to do so [right] but also lacks reasons not to do so [wrong].
b) you might think that utility is some agent-neutral value like overall utility. In this case the government is irrational in carrying out the policy (which would not now maximise expected utility) [not sure what to say about that], lacks reason to do so [right] but also lacks reasons not to do so [wrong].
Thanks, Jussi, for that example. There may also be cases in which the stakes don’t have to be high, but in which we still have reasons of the kind you had in mind. Maybe, for example, there being a chance that one could win something good could be a reason to enter some lottery even though, as one later finds out, one does not win the lottery.
Jussi,
You write:
“what is the evidence that in the folk-assessments of actions you refer to what is criticised is the violation of the Action-knowledge principle rather than other norms with which we evaluate actions?”
The criticism you give in your post is that we don’t say what norm is violated in the cases we describe. The answer to that criticism is that the Action-Knowledge Norm is violated in the cases we describe. This is the simplest explanation of the cases we describe.
You have now switched your criticism. Now, you suggest that maybe some other norm explains why we find the actions criticizable. The alternative you suggest (that Sarah is being criticized for lack of knowledge) is not plausible, since in the cases we discuss, someone’s *action* is being criticized. Be that as it may, it surely is not incumbent upon us to reply to all possible alternative explanations of the data, especially when it’s not clear that any are as straightforward as ours. In any case, you’ve clearly abandoned the initial criticism (which was, recall, that we fail to say which norm is being violated), and we are now initiating a different discussion (whether there are alternative explanations of the data).
It’s good to know that you find Dancy’s criticism of the view that propositions aren’t reasons compelling. Like Mark, I don’t see any reason to think that Dancy’s reasons are any less forceful in the case of wants, fears, and desires (that is, if cogent, they would also show that propositional attitudes are not relations to propositions). I’m happy to get your opinion about what you find “uncompelling”, but I’m not inclined to give what you find compelling or uncompelling much weight, especially when what you find compelling has such disturbing consequences.
There is an extensive discussion in the paper of the last paragraph of your reply (see p. 15, the reply to Susanna Siegel’s objection).
Finally, I’d like to point out that nothing you say warrants the harshly condescending rhetoric of your original post, that our paper “is a strange new world in which everything looks slightly peculiar” (and, the claims in our paper “just reveal ignorance on their part”). You’ve raised one objection that has a simple answer (namely what norm is violated in the cases we describe), one ‘objection’ that is in fact what we say in our paper (your ‘first problem’), one argument that we categorically reject (an argument that would entail that the multiple relation theory of judgment is correct!), and a second objection that, once it is pursued, turns out to be one that is responded to in the paper.
But perhaps I’m simply too ignorant to be sufficiently chastened by your superior wisdom.
Sven,
Nowhere do we say in our paper that:
“whenever somebody acts on false beliefs, we and others would all criticize her.”
In fact, we quite clearly deny this. As we repeatedly emphasize, someone can violate a norm, but for a perfectly excusable reason. The fact that I lie to the Nazi who comes to my door asking where the Jews are doesn’t show that it isn’t a norm that one oughtn’t lie. Many times, violations of norms are perfectly excusable.
Jussi,
You write:
“Contra H&S, I don’t think acting with false information is always criticisable. I would like accounts that can capture both of these phenomena.”
For the reason given in the above reply to Sven, this isn’t ‘Contra H&S’. We are very clear that there are cases in which acting with false information is excusable. One has still violated a norm, but for an excusable reason. Hence, one isn’t subject to criticism. So our account is an account that captures both of these phenomena. So you should prefer it.
Sven,
Your lottery case is extensively discussed in the paper (see the reply to Objection 2); also the discussion on p. 10 of my book *Knowledge and Practical Interests*.
Jason,
If you’re still hanging around I want to say that I’m with you and Hawthorne about 90% of the way, but I wonder what you think about using Gettier cases as both counterexamples to JTB analyses of knowledge _and_ to the suggestion that you oughtn’t use p in practical deliberation unless it’s known.
Do you bite the bullet? I’m dying to know.
Hi Jason,
thanks for these responses. As my remarks suggest, I have not yet read the paper, but have instead based my hasty comments wholly on Jussi’s posts about it. I shall however promptly read it, because it seems to me very interesting.
All the best, Sven
Jason,
I’m sorry. The original post was not meant to be condescending, nor am I claiming any superior wisdom. Far from it.
By saying that it is a strange new world, I meant that, well, I read papers and books about practical rationality and reasons pretty much daily. The views discussed in this literature are supposed to be on the same topic as your paper, but things look very different from the assumptions that are made when epistemologists discuss reasons. I’m sure both sides can learn from one another from the material that there is. I was rather taken aback by the claim you made that none of the previous views about practical rationality gives any help in understanding the connections between rationality and knowledge. But maybe you know why – I would like to learn this too.
The question about which norm is violated in the post was not a criticism but a request for clarification. Thanks for the reply. I didn’t change my criticism but began to wonder whether violation of that norm is really behind the data you present with the cases. You say that violation of this norm would be the simplest explanation. It could be, though I’m not sure about that yet. I think you misread my reply (which probably wasn’t clear enough). I thought your view would be that Sarah is criticised for taking a proposition she doesn’t know as a reason. My proposal (‘instead’) was that she is criticised for taking them to the wrong place (as opposed to violating the norm you refer to). That seems like an action.
We haven’t really got into Dancy’s reasons for thinking that propositions are not reasons, which are independent of what he thinks propositional attitudes are targeted towards. As I wrote to Mark, I don’t see how these are related questions. In any case, we do seem to have a clash of intuitions here about whether reasons are worldly things or semantic entities. I’d just like to note that it’s not only Dancy and I who think it is the former. In the reasons literature, it seems pretty standard to think that reasons are facts, states of affairs, or the like. But, I know that questions about burden of proof are difficult.
Susanna’s point you discuss there seems to be slightly different from what I had in mind. As I see it, that point is about epistemic probabilities, whereas my point was about the proper objects of the knowledge that counts as reason-providing.
But, I’m sorry about the tone of the original posting. It did get the discussion off the rails. I’m sure there would have been a more constructive debate to be had about the really good paper.
Oh yeah – about the last comment. We have two choices:
1) a globally applying norm with excuses on some occasions
2) a hedged principle that does not apply in some contexts because of disablers.
It’s an interesting question how to resolve disagreements of this sort, which we can find in many contexts. 1) seems to require that in the excused cases there was some prima facie violation to be excused. I’m not sure that there is evidence for this with the norm and the cases we are talking about.
I’m not sure whether this is the done thing, but I’d like to put in a plug for a paper that I’ve cowritten with Stephen Kearns, “Reasons as Evidence”. It is relevant to the above discussion (although it doesn’t engage with Hawthorne and Stanley’s stimulating paper). As Jussi says above, “Currently, epistemologists seem to be very interested in practical reasons”. He may find our paper very strange, but I hope he’ll at least be satisfied that we are engaging with arguments from the ethics reasons literature. Please note that this is a first draft. Comments are welcome (daniel.star@anu.edu.au). Click on my name below to be taken to a pdf of the paper.