Welcome to our Ethics review forum on Sarah McGrath’s Moral Knowledge (OUP 2020), reviewed by Eric Wiland.
Below, you’ll find a description of the book, as well as a condensed version of Eric’s review. Sarah’s response will appear in the comments. Please join Eric and Sarah in continuing the discussion!
Book Blurb:
Compared to other kinds of knowledge, how fragile is our knowledge of morality? Does knowledge of the difference between right and wrong fundamentally differ from knowledge of other kinds, in that it cannot be forgotten? What counts as reliable evidence for fundamental moral convictions? And what are the associated problems of using testimony as a source of moral knowledge? Sarah McGrath provides novel answers to these questions and many others, as she investigates the possibilities, sources, and characteristic vulnerabilities of moral knowledge. She also considers whether there is anything wrong with simply outsourcing moral questions to a moral expert and evaluates the strengths and weaknesses of the method of reflective equilibrium as an account of how we make up our minds about moral questions. Ultimately, McGrath concludes that moral knowledge can be acquired in any of the ways in which we acquire ordinary empirical knowledge. Our efforts to acquire and preserve such knowledge, she argues, are subject to frustration in all of the same ways that our efforts to acquire and preserve ordinary empirical knowledge are.
Excerpt from Eric’s Review:
[I]n Moral Knowledge, Sarah McGrath clearly and powerfully argues that we can acquire moral knowledge in all the ways we come by ordinary empirical knowledge. Just as I can know that it’s now raining by perception (seeing and feeling the raindrops), by inference (the people outside are using umbrellas), or by testimony (my mother, who is outside, is texting me about the weather), so too I can gain moral knowledge by any of these channels. […] […] The book has an introductory chapter, a concluding chapter, and, in between, four substantive chapters, each of which is devoted to one particular subtopic: the method of reflective equilibrium, testimony and expertise, observation and experience, and losing moral knowledge. The chapter on the method of reflective equilibrium (MRE) is the best discussion of the topic I know of. McGrath is aptly pessimistic about its powers. She argues that on its most defensible interpretation, MRE takes for granted that we typically already have some moral knowledge, knowledge that the method hopes to extend by making our moral views more coherent. But this means that we don’t get all our moral knowledge from MRE. Much like testimony […], MRE can extend only what already exists.
[…] McGrath argues that when we reflect upon our moral convictions, we should prioritize neither our general moral views nor our lower-level moral judgments. […] If we weren’t justified in being confident about one level, we wouldn’t be justified in being confident about the other level; and if confidence in neither were justified, then MRE couldn’t take us anywhere good. Fortunately, we do already have some moral knowledge, and so MRE can extend it modestly. MRE, however, might be best at delivering not moral knowledge, but moral understanding. When we align our general moral views and our particular moral judgments, we better grasp why those particular moral judgments are true. The more general principles can explain the facts captured by our particular moral views, and grasping these explanations is one form moral understanding takes. If the method of reflective equilibrium is better at delivering moral understanding than moral knowledge, the opposite can be said, McGrath argues, about the method of testimony. […]
Although testimony is indeed a source of moral knowledge, McGrath argues that the epistemologically interesting issues concern not moral testimony per se but the broader issue of moral deference. The putative problem is that if you hold a moral view because you’ve deferred to someone else, then you typically don’t understand why that view is true. This is problematic for at least two reasons. First, when you judge something to be wrong, you are expected to be able to cite facts in virtue of which it is wrong. But if you have completely deferred to the view of another, then, she argues, you won’t be able to meet this expectation. (One might wonder, however, whether these facts too could be learned by testimony.) Second, it’s an ideal of moral agency to be able to do the right thing for the right reason; but if you know only what’s right, and don’t understand why the right thing is right, then you won’t be able to do the right thing for the reasons that make it right. So, acting on the basis of moral deference is, at best, second-best.
I’ll […] briefly flag a couple of worries about this criticism of moral deference. As I’ve argued elsewhere, in typical responsible cases of (adult-to-adult) moral deference, the hearer does grasp the various operative reasons (or goods and bads) at stake, but defers to a speaker about how to weigh them up. For example, if you are a minimally competent adult, you already know that it’s pro tanto bad to allow five people to die and that it’s pro tanto bad to kill one person, yet you may remain unsure whether it’s wrong to turn the trolley, or to kill a healthy patient for their organs. Thus you might defer to someone in a better position to know such things. But even if you do so defer, 1) you could still cite facts in virtue of which one of the options is wrong (“That’s allowing five people to die!”), and 2) do the right thing for the right reason (“I’m turning the trolley to save five people.”) So I think McGrath doesn’t completely show that moral deference is problematic in the ways she describes. […] […] [I]n the most ambitious chapter, McGrath argues that experience and observation can contribute to moral knowledge in the very ways they contribute to ordinary knowledge. One way experience contributes to moral knowledge is by enabling us to entertain the relevant contents: you can know that, say, murder is wrong only if you have the concept murder, and experience can enable you to grasp that concept. Experience can also trigger moral knowledge. As a young man, Einstein was an absolute pacifist, but witnessing the Nazi era led him to conclude that violence could be just. […]
More ambitiously, McGrath argues that observation and experience can confirm and disconfirm one’s moral views, even those views that are also knowable a priori. Non-moral observation can disconfirm one’s moral views because it can make one’s overall view less coherent, in such a way that the most reasonable way to rectify the incoherence is to give up one’s original moral view. Likewise, when non-moral observation makes one’s overall view more coherent, it thereby tends to confirm the moral views thus implicated.
[…] Suppose Ted initially believes both 1) that same-sex marriage is intrinsically wrong and shouldn’t be condoned, and 2) that social recognition of same-sex marriages would have bad consequences, including leading to an increase in the divorce rate. Regarding this second belief, even though Ted thinks that the wrongness of same-sex marriage is not due to any bad consequences flowing from its recognition, he still believes it would lead to bad consequences at least in part because it’s (already) intrinsically wrong. Now suppose that at some later time, Ted observes that the legal recognition and social acceptance of same-sex marriage do not lead to an increase in the divorce rate […] Ted’s views are now less coherent. He could adjust his views in various ways to make them coherent again, but suppose he retains the view that recognition of intrinsically wrong practices leads to bad consequences while decreasing his confidence that same-sex marriage is wrong. This adjustment is rational, and it shows how Ted’s original view that same-sex marriage is wrong may be disconfirmed by observation.
Let me flag a worry […]. Ted’s moral view is disconfirmed by observation only because he is confident in the complex conditional: if same-sex marriage is wrong, then if same-sex marriage becomes socially accepted, the divorce rate will rise. Observation shows him that the main consequent of that conditional is false. So if Ted retains his confidence in the conditional, he will need to lower his confidence in the main antecedent (viz., same-sex marriage is wrong). This is how observation can disconfirm a moral view.
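(To make this structure fully explicit, it can be cast in minimal Bayesian terms. The labels below are mine, not McGrath’s or Wiland’s: let \(W\) be the proposition that same-sex marriage is wrong, and \(D\) the proposition that the divorce rate rises once same-sex marriage is socially accepted.)

$$
P(W \mid \neg D) \;=\; \frac{P(\neg D \mid W)\,P(W)}{P(\neg D \mid W)\,P(W) + P(\neg D \mid \neg W)\,P(\neg W)}
$$

If Ted’s complex conditional gives him \(P(D \mid W) > P(D \mid \neg W)\), then \(P(\neg D \mid W) < P(\neg D \mid \neg W)\), and the formula above yields \(P(W \mid \neg D) < P(W)\): observing that the divorce rate did not rise rationally lowers his credence that same-sex marriage is wrong.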
But this structure is available to any domain, not just morality. Suppose Ted also initially believes, contrary to Euclid’s Theorem, that there is a largest prime number. He also holds that if there is a largest prime number, then if Euclid’s Theorem and other false mathematical views become widely held, technological advancement will decline. Suppose, however, he observes that as more […] schoolchildren are learning Euclid’s Theorem, technology continues to advance […]. Ted’s views are now less coherent. […] [S]uppose he retains the view that widespread mathematical ignorance hurts technological development, but decreases his confidence that there is a largest prime number. This adjustment is rational, and it shows how Ted’s original view that there is a largest prime number may be disconfirmed by empirical observation.
But Euclid’s Theorem, of all things, isn’t confirmable empirically. Proof seems necessary. […]
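(For concreteness, the purely a priori route the review alludes to is presumably Euclid’s classical argument, sketched here in my own notation.)

$$
\text{Given any finite list of primes } p_1, p_2, \dots, p_n,\ \text{let } N = p_1 p_2 \cdots p_n + 1.
$$

Since \(N\) leaves remainder 1 when divided by each \(p_i\), any prime factor of \(N\) lies outside the list; so no finite list contains every prime, and the conclusion is reached without any empirical observation at all.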
Moving on, the final substantive (and most original) chapter discusses the question whether one can lose moral knowledge. Gilbert Ryle famously claimed it was ridiculous or absurd to say “I’ve forgotten the difference between right and wrong.” If correct, this might seem to show that moral knowledge is not like knowledge gained by expertise or ordinary empirical knowledge, which can be forgotten. […]
McGrath [replies by] arguing that while you can indeed lose moral knowledge, doing so corrupts you in a way that makes it difficult and awkward to recognize that you’ve been so corrupted. […] What makes it absurd to say that you’ve forgotten the difference between right and wrong is not that this proposition can’t be true, but that when it is true, you’re not in a good position to recognize its truth. So, while ceasing to care about the right thing is indeed one way to lose moral knowledge, McGrath aptly argues that there could still be other ways to do so, including by forgetting things.
[…] Space doesn’t permit me to summarize McGrath’s critical discussion of Ronald Dworkin’s view that our moral beliefs are relatively immune to being undermined by discoveries about their etiologies, but her reply to Dworkin is one of the most compelling arguments of the entire book, so I’ll leave it as a teaser for you to check out for yourself.
Thanks so much to PEA Soup (and Jordan MacKenzie), to Sarah for her wonderful book, and to Eric for his great review.
Small autobiographical bit – I was inspired to write my dissertation on moral perception in part because of McGrath’s excellent earlier work on wide reflective equilibrium. So I found myself excitedly nodding along to that part of the book.
Now, I have not finished the book, so it is certainly possible I missed something, but I feel that the issue raised in this chapter does not get cleanly resolved anywhere in the remainder of the book.
As I see it, the issue that McGrath rightly raises here is that WRE cannot be the full story in moral epistemology, since it *presupposes* some other route to moral knowledge, on which reflective equilibrium then goes to work.
However, while the stories in the book about moral testimony and (dis)confirmation of moral beliefs by observation may be compelling in their own right, I don’t see why they don’t suffer from the very same principled limitation that McGrath points out about WRE – they only work if we *presuppose* some other route to moral knowledge. In the testimony case, this is clear, as Wiland points out in his review (and McGrath acknowledges in the book). But McGrath’s arguments for moral knowledge by observation *also* presuppose that we have some alternative route to moral knowledge, as far as I can tell. This is clearly true in the case of Ted, by McGrath’s own lights (I take it): Ted has certain conditional moral beliefs about what kinds of non-moral observations would occur were his moral beliefs true. They don’t occur, so he has evidence against his moral beliefs. But this kind of coherence-based reasoning, even if it is Bayesian, is just wide reflective equilibrium, isn’t it?
The closest McGrath comes to giving us a positive route to moral evidence that isn’t just coherence-based reasoning on prior moral knowledge is her idea of moral conditioning. But since conditioning itself requires an independent presupposition about the relative reliability of our social peers, isn’t this just smuggling in moral testimony, albeit in a group rather than an individual sense?
Hi everyone! Thanks so much to Eric Wiland for taking the time to read my book and to write this thoughtful review! And thanks to Jordan MacKenzie for organizing! I am grateful for the opportunity to engage with Eric and interested PEA Soup readers about Eric’s review.
I wanted to start things off by commenting on two objections that Eric raises in the excerpt above.
The first objection has to do with some of the things that I say about moral deference. First, I think Eric raises a great point in his parenthetical comment, when he suggests that one could defer to someone else not just about what to do but also about the reasons why. In my book I consider the example of a man who tells you that he believes eating meat is wrong, and when you ask him why, he says he has considered the arguments against eating meat, and they leave him cold. He defers to his wife about moral matters because he believes that her moral judgment is better than his is. He treats her as a moral expert. I take Eric to be pointing out that we could add to the example: he defers to her not only about whether eating meat is wrong but also about the reasons why it is. So now he knows that eating meat is wrong because it has such-and-such features… isn’t he in a position to do the right thing because it has such-and-such features, in other words, for the reasons that make it right?
My reply is that even if he defers to her about the reasons why it is wrong, he might still not be in a position to perform the action *on the basis of* the relevant reasons. In order to act *on the basis of* the reasons which make the action the right thing to do, the agent must appreciate those reasons *as* reasons, not merely have propositional knowledge that they are the reasons for performing the action. Compare: you could know both that some mathematical claim is true and that some collection of propositions makes up a canonical proof of it, but not yet see the collection of propositions *as* a proof of the claim: a math student who defers to her teacher might not yet be in a position to believe the claim on the basis of the proof in the way that the teacher is. Similarly, an agent who is dependent on pure moral deference for her knowledge that an action has certain right-making features is typically not in a position to perform the action on the basis of those reasons, in the way that someone who genuinely grasps the connection between the two is.
In the main part of Eric’s objection, he says that in a lot of these cases where it is going to make sense for one adult to defer to another, the issue is going to be: how do I weigh competing reasons against each other? That seems right! But Eric thinks that in this kind of case, were you to defer about what to do, “1) you could still cite facts in virtue of which one of the options is wrong (“That’s allowing five people to die!”), and 2) do the right thing for the right reason (“I’m turning the trolley to save five people.”)” I agree with 1) but I am not so sure about 2).
Here is an example. Liz Harman considers a case, in her paper on moral ignorance (“Does Moral Ignorance Exculpate?”), of someone who is raised to believe in “an ethics of ‘everyone should take care of his own.’” (2011: 457) This person “goes into the family business and believes in an ethics of deep loyalty to the family business group and no moral obligations to those beyond it” and kills a store owner who won’t pay for protection. So now consider a variation on the example where the guy is torn: he realizes that he does have moral obligations to people beyond his family but is unsure whether the deep loyalty to his family outweighs those obligations in this case. And suppose that he defers to his wife, who tells him not to do the killing. He would be in the position that Eric describes: he would know that these reasons having to do with outsiders outweighed the reasons having to do with the family loyalty in this particular case, so in this sense he would know why he should refrain from killing. But I don’t think that means that he appreciates the weight of the relevant reasons or performs the action on the basis of those reasons.
The second objection that I wanted to comment on is about Ted. Eric’s discussion begins with “Let me flag a worry” and ends with the line: “But Euclid’s Theorem, of all things, isn’t confirmable empirically. Proof seems necessary.”
I actually think that Euclid’s Theorem and mathematical claims in general are confirmable empirically. As I say in the book, we should distinguish the view that a domain of knowledge is a priori in the sense that knowledge in the domain *can* be acquired by a priori means (i.e., without the benefit of empirical observation), from the much stronger view that knowledge in that domain can be acquired by a priori means and *only* in that way. In general, to say that a given subject matter is a priori is to say that knowledge of that subject matter is *potentially* available from the armchair. That is consistent with the view that truths about the subject matter can also be known in other ways. The vast majority of those who believe Euclid’s Theorem do not believe it on the basis of understanding the proof; they believe it on the basis of deference to others. That is, they believe it on empirical grounds. Now I agree that it sounds very odd to say, “Euclid’s Theorem is confirmable empirically.” And I agree that proof is necessary to “confirm” it if “confirm” means something like “prove”! But that is consistent with the claim that I am making, which is that empirical observation can confirm/disconfirm a moral view, in the sense of making it rational to raise/lower one’s credence in that view.
Thanks for doing this, Sarah!
I should have been more careful when discussing the Euclid’s Theorem case. I agree that in some sense it is confirmable empirically. We can certainly get testimonial reason to believe ET. But there seems to be some stronger sense of confirmation that is empirically unavailable. Perhaps no English word unambiguously picks this out. Or perhaps this sense of confirmation is really just understanding-why: you can’t understand why ET is true merely empirically.
I guess the point I was going for is that the book’s argument that moral knowledge is empirically confirmable has surprising implications about *all* of our knowledge (mathematical, logical, philosophical), and it’s good to be aware how broad this is. Or, do you think that there is *any* claim that’s knowable *only* a priori? (“I exist”?)
About my first objection — I think Sarah’s reply to me helpfully pushes the dialectic along. She is right that, say, in the store owner case, the protagonist might refrain from killing the store owner on his wife’s say-so, while insufficiently appreciating the reason not to kill the store owner, such that his refraining lacks moral worth.
Whether it does lack moral worth, I think, depends upon *why* he defers to his wife. If he defers merely in order to keep the family peace, then I agree with Sarah. But if he defers to her because he thinks she can weigh up the reasons better than he can, and he accepts her testimony about *that*, then I don’t see why he is not then in a position to act (or refrain) in a way that has moral worth. After all, he knows 1) which action is right, 2) which reasons make it right, and 3) that these reasons outweigh competing reasons. If this doesn’t suffice, then I wonder what other agents could have that he still lacks.
I do agree that there are some things that testimony can’t do: if someone tells you that you are in pain, and what they say is true, and you believe them for that reason alone, there is still something very strange going on! But I don’t see how moral knowledge is anything like personal pain knowledge.
Hi Sarah and Eric! Sarah, I haven’t read your book yet, but I’m looking forward to reading it soon.
I’m wondering if the two of you are talking about different kinds of cases when you talk about testimony. Sarah seems to be talking about a case where someone has deferred that X, Y, and Z are reasons to φ, but isn’t in a position to φ on the basis of X, Y, and Z because he doesn’t appreciate them *as* reasons. Eric seems to be talking about a case where someone hasn’t deferred that X, Y, and Z are reasons to φ, and does appreciate them as reasons, but has deferred on whether they are jointly *sufficient* to make φing right. Am I characterizing things right so far?
If so, I think what Sarah’s already said doesn’t quite deal with Eric’s case, but she can say something pretty similar to what she’s already said that does. Sarah says, “In order to act *on the basis of* the reasons which make the action the right thing to do, the agent must appreciate those reasons *as* reasons, not merely have propositional knowledge that they are the reasons for performing the action.” Similarly, she could say, “In order to act *on the basis of* reasons that are sufficient to make the action right, the agent must appreciate those reasons *as* sufficient reasons, not merely have propositional knowledge that they are sufficient to make the action right.”
Thanks, Keshav. Good point: there are indeed two different kinds of cases. I think Sarah and I completely agree that an agent cannot act from moral worth if the reason that makes the action right is one the agent doesn’t even appreciate as a reason. (“I shouldn’t kill him because it’s pro tanto bad to kill human beings? Well, if you say so!”)
I want to understand better, however, both 1) what it is to appreciate reasons as *sufficient* reasons, and 2) why appreciation so understood is really needed for morally worthy action. I mean, I do see how it is in some ways *better* to appreciate reasons as sufficient reasons (however we spell that out) than not to appreciate them as sufficient reasons. But to grasp why attaining this ideal is unnecessary for morally worthy action, I think it’s better to think about truly hard cases, rather than examples where the protagonist looks monstrous (a mob enforcer), remarkably benighted (the one who thinks animals don’t feel pain), or otherwise extraordinary in a bad way.
Thanks, Eric. I’m still not clear on why this is unnecessary when it comes to full moral worth. To take a case from my own work on moral worth, let’s say I have to decide whether to tell you a hard truth or lie and spare your feelings. I appreciate the fact that it would hurt your feelings as a reason not to tell you the hard truth, and I appreciate the fact that it would be disrespectful to deceive you about it as a reason to tell you the hard truth. But I can’t figure out which reason is stronger, so I defer to Sarah about it. This moral deference ends up determining what I do. Isn’t this “second-best” in precisely the way Sarah is talking about?
Of course, in such cases, we might want to admit of degrees of moral worth. But as long as we’re sticking to the binary and talking about moral worth par excellence, I don’t see why appreciating the balance of reasons for oneself isn’t necessary.
Thanks, Preston. You raise a good worry about coherence. You wonder whether social conditioning requires an independent presupposition about the relative reliability of our social peers, and if so, whether this smuggles in group testimony about morality. Do our moral views have any real contact with the (non-social?) world?
I suppose an Aristotelian (for one) might emphasize how conditioning involves habituation, and that acting in light of the conditioning can reshape one’s emotions, which, if morally cognitive, might provide the ‘friction’ that a coherentist account seems to lack. We certainly need to understand better how acting justly (say) reshapes one’s emotions, and how emotions can be a source of moral knowledge. I wish I had a good view here!
Keshav, your example is a fruitful one to think about. A competent person might find themselves in that situation.
In this example, you defer to Sarah, and let’s presume you do so because you know she is wiser than you. I agree that in one way it’s still only second-best; there is *something* good about you not needing to defer. (Elsewhere, I argue that in other ways there is something good about deferring, but I’ll stay on track now.) I want to know *why* you or others think that this means that your telling the truth (or sparing my feelings) still isn’t morally worthy, or even as morally worthy as it would have been had you not so relied on Sarah.
I worry that this disagreement boils down to a difference of ‘intuitions’ about the case. But if so, I think the burden of argument is on the philosopher who wants to say there is an asymmetry.
Hi Eric!
Your Aristotelian idea is, I think, promising. Emotions are a plausible way to go to try to ground (something like) foundational moral knowledge, and I think the idea of habituation and conditioning to get your emotional systems properly attuned is perfectly compatible with this idea.
There are then of course some deeper worries about the metaphysical status of moral reality on McGrath’s view and why we would expect things with that status to ‘bump up’ against emotional experiences in the kind of way to give us epistemic (or even semantic) access. But that may be a question that is outside of the scope of the book, which is fair enough. (Though any thoughts you or Sarah might have would be appreciated!)
Hi Eric, Preston, and Keshav! I am not a regular in the blogosphere so I am not exactly sure about whether it is bad manners to reply to what you all were saying a few comments up–I hope it is okay! 🙂
First I want to say thanks Preston–for the comments and for the kind words–I am feeling very honored and flattered by the “small autobiographical bit”!
In response to the first comment you posted, about reflective equilibrium: yes, my claim about reflective equilibrium reasoning is that it can’t be the full answer to the traditional epistemological question of where moral knowledge comes from. I am saying that because some friends of reflective equilibrium have presented it as though it is, in fact, *the* answer to this question. But I totally agree that neither testimony nor the kind of disconfirmation of a moral belief by a non-moral observation is the whole story, either! What I call the “working hypothesis” of the book is that any way by which we arrive at ordinary empirical knowledge is also a way by which we can arrive at moral knowledge. The idea is that, contrary to what many theorists have presupposed, we do not need a *special* story about where moral knowledge comes from.
So I think that some moral knowledge is perceptual, some comes from inference to the best explanation, some is a priori, some comes from testimony… This is a sweeping claim, and more than I defend in the book, so my strategy is to focus on the “pressure points,” where people have thought, “we can get empirical knowledge that way, but we can’t get moral knowledge that way”–for example, testimony and empirical confirmation.
Thanks Keshav and Eric! Again I am a little behind here! But this is what I was going to add, re: the stuff about moral worth:
I agree with you that it might be useful to consider a non-monstrous, more ordinary example! (I realize now Keshav has already provided a perfect one from his own work–sorry for multiplying examples!) Here is one adapted from Gideon Rosen’s 2004 paper “Skepticism about Moral Responsibility”:
Bill knows that it’s just plain wrong to lie to your wife about where you’ve been. The trouble is that he also knows that if he tells the truth, he will suffer.
Suppose that in fact, the reasons against lying are stronger and Bill should “tell the truth and face the music.”
Okay, that’s the Rosen example. Now, adapting and continuing the story for our purposes: suppose it really seems to Bill that he should lie. He decides to dial-a-phronimos, just to be sure. He explains the case to the phronimos, and the phronimos says: yeah, no. It would be totally wrong to lie!
So suppose Bill defers to the phronimos and tells the truth. Here are two questions: (i) does Bill’s action have moral worth? (ii) did Bill do the right thing *on the basis of* the reasons that make it right?
I think the answer to (i) is yes and the answer to (ii) is no. (Different people mean different things by “moral worth,” but I am following Arpaly and using it as the thing that goes with praise and blame.) I think that you can be praiseworthy for having the sense to defer to an expert when she’s more likely to get it right, but that in Bill’s case he didn’t fulfill the ideal of responding to the right-making reasons.
Eric: You say “I want to know *why* you or others think that this means that your telling the truth (or sparing my feelings) still isn’t morally worthy, or even as morally worthy as it would have been had you not so relied on Sarah.”
Kant says that actions lack moral worth if their connection to the moral law is “contingent and precarious.” In the case I gave, part of what seems worrisome about it is that, because I failed to myself appreciate the balance of moral reasons, I could have, if not for Sarah’s advice, easily done the wrong thing. So, my action is kind of morally “unsafe.” Ultimately, I think this is just a heuristic for when actions lack full moral worth, but it’s a good heuristic, because right actions tend to be unsafe when they result from a deficient understanding of what’s right. If the deficient understanding involved in acting for right-making reasons without appreciating them as such defeats moral worth, then shouldn’t the deficient understanding involved in acting for sufficient right-making reasons without appreciating them as such also defeat moral worth? As you say, the burden of argument is on the philosopher who wants to say there’s an asymmetry!
Sorry Sarah, I replied without seeing your last comment. It seems like I’m trying to argue for something stronger than you think in response to Eric, so maybe my last couple responses aren’t relevant!
(PS: Doug Portmore just gave a really interesting paper offering an account of moral worth at WiNE, where “moral worth” is what Keshav is talking about here (the connection to the moral law is not ‘contingent and precarious’). There was some discussion about whether/how this kind of “moral worth” comes apart from Arpaly’s, which (I believe!) goes with praise and blame.) 🤔
(Oops sorry that anonymous person was me, Sarah. :))
I just had to unclog a kitchen drain, but now I’m back for a bit!
We all agree that Bill isn’t ideal; in one way, he’d be a better person if he didn’t rely on others. But why think that Bill then doesn’t tell the truth *on the basis of* the reasons that make telling the truth right? After talking to the wise person, Bill knows what’s right, and why. If asked “Why did you tell the truth?”, Bill’s answer (his reason) is as good as anyone’s. He seems to me to be indeed “responding to the right-making reason(s)”. He’s not responding to them as *directly* as some others might. But is direct response to reasons necessary for morally worthy action; and if so, why? That, I don’t grasp myself.
First, I think the “contingency” thing is but a guise. If there had been an invisible hand that made it true that selfish motives always led to moral actions, it would not make the action done for selfish reasons morally worthy. The “non-accidentality” is in the connection between the reason that motivates you and the reason for which the action is right: if what makes the action right is that it’s universalizable, and your reason for action is “it’s universalizable”, your action is morally worthy. The metaphysical connection between your motive and the content of morality is what counts, not “stability” or even “modal robustness”.
Isn’t the Bill case still like Sarah’s original cases, and not like the kind of case Eric has in mind? Or am I getting confused?
Keshav,
Worries about safety could be alleviated by considering a perfect moral testifier. Think of Socrates’s daimon, which says “Stop!” whenever he thinks about acting foolishly. Nothing contingent and precarious there! So, when Socrates so refrains from doing something, and he solo-knows only the pro tanto reasons for and against, is his refraining morally worthy? If the answer is no, then it seems like something *other* than (or in addition to) safety is doing the work.
(Me myself, I suspect that I’m much more reliable when relying on, say, my spouse, than I am when deliberating solo — for me, I don’t need a perfect testifier to become safer.)
Second, about moral experts: I think we should not forget the “Sartrean” character of appeals to moral expertise. Sartre said that whom you consult on a moral question already shows half of your answer – for example, if you consult a priest, it means you are a Catholic, and if you consult a priest who opposed the Nazis, you are anti-Nazi. Whom we admit as an expert “comes from” some idea we have, a good or a bad one, of what the right-making features of actions are. For example, some people regard Mother Teresa as a moral expert. I don’t, because she hated birth control and painkillers. The difference between me and the Mother Teresa fan is in our ideas – however vague or disjunctive – of what morality is about. This means to me that you can get some credit for treating the right person as a moral expert. You might think that means there are no real moral experts, and that might be true.
(Re my previous comments, I assume here that I and the Mother Teresa fan both know about her record regarding painkillers and contraceptives.)
Eric: Like I said, I think safety is a heuristic, not what’s doing the work. My point in bringing it up was just that a lot of us tend to think that when the agent wouldn’t have reliably gotten things right without the testifier, that’s a mark against the moral worth of their action. But I agree with Nomy that the ultimate explanation of that isn’t any modal connection. I think what ultimately matters is an explanatory connection.
K,
Fair enough, and I apologize for not acknowledging that the bit about safety is a heuristic. Still, when Bill trusts sound moral testimony, and acts on it, doesn’t the explanation of his action still involve a good understanding of what’s right? It’s just not *Bill’s* understanding that so explains what he does, but that of another. So, I guess the question then becomes: can a testifier’s understanding make Bill’s action morally worthy, when the former partially explains the latter? (My suspicion: it depends.)
Since this seems to be a meta-ethical discussion, I take it that comments about the scope of moral significance would not be irrelevant.
One question being addressed here is: What more, if anything, does appreciating reason-giving force get you, morally speaking, over and above, as Eric Wiland puts it, (1) knowing which action is right, (2) knowing which reasons make it right, and (3) knowing that these reasons outweigh competing reasons? There have been interesting competing answers, but no one has questioned whether the significance of such appreciation is indeed moral significance, that is, a matter of its role in our moral assessments of actions or persons. Maybe such appreciation’s significance to our lives is not strictly moral.
What’s the significance of not only knowing which mathematical theorem follows and which statements it follows from but of *understanding how* it follows from them? What’s the significance of not only knowing that a sonata expresses melancholy and which features of it are contributing to the expression of melancholy but of *hearing* the melancholy so expressed? Whatever the significance is, I’m not so sure it’s strictly mathematical or strictly musical.
The significance might be intensely personal. In finally understanding how the theorem follows, for example, I might experience a deep pleasure. It might make me seek understanding in other parts of mathematics, in other intellectual spheres, or in other non-intellectual parts of my life. Or, quite humbly, perhaps all that happens is that I never forget what it felt like to understand. Whatever. Something happens to my psychic economy when I understand. This might be clearly manifest in moral choices I make, or it might not, in existential choices or not. But the change in my outlook is something significant to me, because it is my outlook; it’s something I live with my whole life; it’s never not there, never not coloring my experience.
The same might go, mutatis mutandis, for the musical and the moral case. Even if it makes no moral difference if one fails to appreciate reason-giving force, it might make an existential, personal difference. An impoverished inner life is something to be mourned, even if not morally assessed.
Thanks, Sarah and PEA Soup, for doing this.
I was wondering if Sarah’s concerns with reflective equilibrium can be addressed by giving it a more social turn. I have in mind the ‘Dialectical Equilibrium’ that Brink proposes in many of his later works. Reflective equilibrium is traditionally understood as an individualistic enterprise, and hence the issue of ‘perverse’ judgments that you raise is a significant concern. However, if we include interpersonal deliberation as part of the coherence-seeking operations, perhaps that worry can be tackled. I also see a lot of similarities with the Cornell realists’ account of moral knowledge in how you see parity between moral and empirical knowledge.