Herewith our forum hosting the lost 2020 Pacific APA author-meets-critics session on Al Mele’s book Manipulated Agents: A Window to Moral Responsibility. Al’s précis is here. Click the following links for the criticisms: Carolina Sartorio and Gunnar Bjornsson. (And below are Carolina’s and Gunnar’s abstracts of their remarks.) Finally, Al’s response to Carolina and Gunnar is here. Please join in on the discussion, even if you haven’t read the book!

Carolina Sartorio: I discuss the implications that Al’s book has for compatibilist views of free will. I make two comments. The first is a friendly remark on the historical condition of responsibility that the book advances. Given that this is a purely negative condition, there is the worry that any condition of that kind will fail to be illuminating, or that it will fail to capture the true nature of freedom. I suggest a way in which such a worry could potentially be addressed, by drawing an analogy with personal identity and views of that concept that incorporate a “closest continuer” condition. My second comment is more critical, in that it questions the reliability of the specific intuitions about cases on which the view relies. I suggest that those intuitions could be tainted by the fact that the cases involve agential manipulation (instead of natural influences), and temporary (not permanent) character transformations.

 

Gunnar Bjornsson: Al Mele’s Manipulated Agents: A Window to Moral Responsibility (OUP 2019) is an extraordinarily careful and clear little book. A central recurring element is the use of examples of radical value reversals due to manipulation. In this commentary, I discuss the relevance of these examples to a simple quality of will account of blameworthiness without explicit historical conditions. Such an account, I suggest, can fairly straightforwardly explain how value reversals might mitigate blameworthiness. But I also suggest that the intuition that they completely remove blameworthiness should instead be explained away.

16 Replies to “Mele Melee: Al-Mele-Meets-Critics Sartorio, Bjornsson (Lost Pacific APA 2020)”

  1. Hi Al, hi Gunnar!
    Thanks for the replies, Al. It’s interesting for me to think about how our intuitions about Chuck’s responsibility would vary depending on what happens in the future. On the “unreversed” reversal cases: I agree that if Chuck gets to reflect about his values over time and comes up with good reasons to identify with them, in that case we’d be more inclined to find him praiseworthy. But the case I was interested in was one where basically nothing changes, except for the passage of time… Still, it seemed to me, we would be more inclined to praise him, say, a year down the road. At the same time, on reflection, I think it’s clear that the mere passage of time cannot make a difference to his praiseworthiness. If so, this gives us at least some reason to doubt the initial judgment about the original case (the “reversed” reversal case).

    Also, it occurs to me that another way in which time could be affecting our intuitions concerns how sudden or gradual we imagine the changes to be. Take the blind forces examples, and imagine that the electromagnetic field/brain tumor/lightning strike, etc. results in very slow, gradual changes in personality (but in the same kinds of changes at the end of the day, and in changes that don’t result in any deep reflection by the agent, etc.). Isn’t the intuition that Chuck is not responsible even less clear in these variants with gradual changes? But, again, isn’t it clear that this shouldn’t matter?

  2. Thanks, Carolina. On your first point, I imagine someone having the following internal dialogue.
    A. Chuck isn’t morally responsible for his good deeds on day 1. But I reckon that, after a year or so, if Chuck keeps up the good work, he deserves some praise for his good deeds.
    B. But suppose that nothing changes except for the passage of time. Chuck doesn’t do anything to take ownership of the implanted values, e.g.
    A. Ah, right. I was assuming that Chuck would learn that his life went better for him with his new values and identify with them for that reason or something of the sort.
    B. So reject that assumption. And assume that the only difference between Chuck on his first day with the new values and Chuck one year later is the passage of time. What then?
    A. That’s not easy for me to picture, but I’m trying. OK. With the new assumption in place, I reckon that Chuck’s moral responsibility for his good deeds on day 366 is just the same as his moral responsibility for his good deeds on day 1 – that is, zero.

    That ending strikes me as easier to accept than the following one:

    A. That’s not easy for me to picture, but I’m trying. OK. With the new assumption in place, I reckon that Chuck’s moral responsibility for his good deeds on day 366 is just the same as his moral responsibility for his good deeds on day 1. And on day 366, even with the new assumption in place, I reckon that Chuck is morally responsible for his good deeds. So now I judge that he’s morally responsible for his good deeds on day 1.

    Of course, to explore this properly we’d have to look into what it means for basically nothing to change except for the passage of time. It wouldn’t be fair for me to put Chuck in suspended animation for 364 days, for example.

    Now, for your second point. A very important feature of my radical reversal cases (from my point of view) is that values are removed and generated in ways that (in my shorthand) “bypass the agent’s capacities for control over his mental life.” Having the reversal happen overnight while the agent sleeps or all at once makes this feature salient. If we slow the reversal down significantly, then (depending on other details) people might tacitly assume that these capacities are engaged to some extent. (And if these capacities are engaged to some extent, the agent might have some moral responsibility for what happens.) The speed of the process – just by itself – doesn’t matter. For example, if a lightning strike puts Chuck in a coma for a month and slowly reverses his values over that time, there’s no difference, with respect to moral responsibility, between this Chuck on his first day out of the coma and a Chuck for whom the value reversal is lightning quick.

  3. Hi Al, Carolina, and Gunnar! Thanks for the excellent discussion. I hope several more people chime in.

    My view on moral responsibility and agents’ histories has shifted since I first read Al’s earlier books (Autonomous Agents and Free Will and Luck). I used to be convinced (especially by cases like One Good Day) that moral responsibility is an essentially historical concept. I’ve landed on a view closer to Gunnar’s, according to which a person’s degree of responsibility may depend on their history but the fact of a person’s responsibility (i.e., whether they are responsible) doesn’t depend on their history. And here’s the core reason for my change of view: manipulated agents (who nevertheless satisfy time-slice conditions on responsibility) act from characters over which they had no control, but so too do “little agents”—think of children performing the first actions for which they are responsible—whom believers in responsibility must admit are responsible (in order for responsibility to “get off the ground”).

    Now, historicists may admit that manipulated agents and little agents do not differ with respect to their constitutive luck, but they will add that there’s more to the story. After all, little agents may satisfy their historical conditions on responsibility, whereas manipulated agents won’t. Fair enough. My lingering question for Al and other historicists, though, is this: what would explain a manipulated agent’s lack of responsibility if it isn’t their constitutive luck (lack of control over the character from which they act)? (It can’t be anything about the involvement of another agent, so long as we think there’s no difference between that sort of manipulation and manipulation by non-agential sources, like brain tumors. What else might it be? Or perhaps we shouldn’t seek an explanation here?)

    To tie this back to the present discussion, consider Carolina’s point about the time following manipulation. Supposing that the manipulators in One Good Day were to leave the implanted values in Chuck, I think we’d all agree that Chuck can be responsible for some of his good behavior a year after his manipulation. But if he isn’t responsible for the good he does right after manipulation, when does he become responsible, and how? In my view, we should think of Chuck like we think of little agents, and so we should say that he can be (a little bit) responsible right away but that his degree of responsibility may increase over time, as he has further opportunities to reflect on and modify the character that he initially had no control over.

  4. Thanks for the replies, Al! (Hi Carolina!)

    I share your sense that the additions you propose to the case of Billy strengthen the intuition that Billy isn’t responsible. I am less sure that they change the dialectical situation. On neither of our views do these additions affect whether Billy is responsible or not. The way you see it, he wasn’t responsible in my original version: what the additional features do is merely to focus our attention on what took his responsibility away. The way I see it, the additions make the value reversal more explanatorily salient by highlighting the contrast between what Billy had worked for and what he nevertheless did. This provides an even stronger prompt for an explanatory perspective which takes the manipulation rather than Billy’s quality of will that day as the independent variable from which other things follow. What I’ve argued for elsewhere is that this shift of perspective makes for something akin to a perceptual illusion: though the facts grounding Billy’s responsibility are in front of our eyes, we frame them in the wrong way. But I won’t repeat the arguments I’ve given elsewhere for this view. Instead, I want to focus on two issues that you helpfully raise towards the end of your reply.

    The first concerns the relevance of Billy’s ability to properly take into account and act on the needs of his children. Here you suggest that given the manipulation, this ability is not enough to ground Billy’s blameworthiness for not doing so: “Regarding treating the children as he should, Billy has been very seriously handicapped by the value reversal – so handicapped that, in my view, he does not deserve to be blamed for his bad conduct.” You also offer an analogy: highly skilled major league pitcher Billy, who gets his pitching skills erased and replaced with those of a little league pitcher, Willy. Even if Billy can throw a strike with his new skill set – Willy occasionally does – he cannot be blamed for failing. Though I can be pulled into seeing Billy the value-reversed father and Billy the skill-reversed pitcher as not deserving blame for what they do after the reversal, I remain torn.

    The problem is this: if the handicap undermines Billy’s relevant abilities enough, then it also undermines the demands that he is operating under at the time, for entirely non-historical reasons. Moreover, if the operative demands are weakened, then Billy fell less short than he would have if he had done the same with his pre-reversal abilities, and he is less to blame. And if they are weakened enough, he might not have fallen short at all. So to make the case work, we need to make sure to preserve enough ability not to completely undermine demands for non-historical reasons. But if we do, it just does not seem clearly right that Billy has been handicapped enough by the value reversal not to deserve any blame for the failures that result.

    To pump intuitions: Suppose that Billy the pitcher recognizes what had been done to him, but also recognizes that he has enough of an ability to throw a strike for it to be a valid demand on him that he be responsive to the circumstances so as to throw a strike. Then if he fails, is it clear that he can’t properly blame himself for missing? Not to me at least. Similarly, suppose that Billy the father has enough of an ability at the time to recognize his children’s needs and their moral importance and respond properly to them to ground demands that he so respond. Then is it clear that we can’t blame him for failing to so respond, just because of the prior value reversal? As far as my intuitions go, I remain torn.

    Since I remain torn, I need guidance from independent arguments, not relying on intuitions about cases like these. (The arguments I’ve proposed suggest that blame might be fitting in both cases.)

    At the end of your reply, you ask what the consequences would be for the sort of Strawsonian quality of will view I defend if agents that have been subject to radical value reversals are not responsible for what they do. (In brief, the view says that agents are blameworthy for things that are bad and explained in normal ways by their will falling short of what morality demands.) Here I agree that one could go with the possibility you mention: one could take these cases as reasons to think that the relevant demands on an agent’s will are themselves subject to negative historical constraints. But I don’t think that we should go there: Demands on the will are not subject to such constraints. If Billy finds himself with new values and knows that he can respond adequately to available moral reasons, the mere fact that people messed with his head before does not relieve him of the demand to so respond. If Billy really isn’t to blame for failing to respond, then, the quality of will view would have to be amended. So I’m inclined to think that radical value reversal cases constitute real threats to the view.

  5. Hi everyone. Hope you’re all well. Some opinions, for what they’re worth:

    Gunnar says: “Personally, when focusing on the manipulation and the fact that what Billy did was not previously an option for him, I do have the sense that Billy isn’t to blame for what he did. But when I focus on the fact that it remained an option for him not to leave the children at home, it also seems that he is to blame, to some extent. So I’m torn.”

    I feel similarly, and always have. But despite feeling torn, Taylor’s arguments (including his question posed above) convince me to side with my anti-historicist intuitions.

  6. Oh, and I’ll also add two other thoughts.

    To answer a question Al asks in his responses, I’m moderately confident in my judgment that Billy is blameworthy.

    Like Gunnar, I agree that the details Al adds to the story about Billy hike up, as Al puts it, “confidence that it would be unfair to blame Billy for his deed.” But why? Gunnar offers some possible explanations. Here’s another that I’m keen on. Because Billy has been so radically altered after all his hard work to become a better father, we rightly see him as a victim, and it can sometimes be hard to blame people we see as victims. (Think about what happens to some people’s judgments when they learn the gory details of Robert Harris’s childhood.)

  7. Hi Taylor and Justin, hope you’re doing well!
    Just to get clear on the ahistorical views you’re proposing: the ahistorical view only commits you to claiming that the victims of Al’s reversed reversal manipulation cases are responsible for what they do when they act. But this is consistent with claiming that it might still be inappropriate to blame/praise them after the reversal, that is to say, once their values go back to what they were before. In my view, this makes for a better version of the ahistorical view than one without that qualification. What do you think?

  8. Hi, everyone. I’m back. I’ll start by thanking Taylor, who raises a pair of interesting questions. Before I answer, we should let everyone else know about our discussion of these issues in Philosophical Studies – Taylor’s “Manipulation and Constitutive Luck” and my reply, “Moral Responsibility and Manipulation: On a Novel Argument Against Historicism.” I reread the latter in hopes of finding short replies to Taylor’s questions. But my replies there totaled about 5000 words! So I’ll take some big shortcuts here.

    Question 1. “What would explain a manipulated agent’s lack of responsibility if it isn’t their constitutive luck (lack of control over the character from which they act)?”

    Here, Taylor highlights “little agents” – “children performing the first actions for which they are responsible.” I told a story about such an agent in Free Will and Luck (2006, pp. 129-32). It was about a four-year-old boy who has been scolded many times for snatching toys from his younger sister’s hands and is thinking about doing it again. The boy, Tony, gives some thought to pros and cons and decides not to grab the toy.

    How similar is Tony’s story to a story based on One Good Day about Chuck’s first good action on that day? Chuck’s manipulation rendered him constitutively lucky regarding his new values, just as Tony is constitutively lucky regarding his “character.” Taylor (in the paper I mentioned) would seem to hold that Chuck’s constitutive luck is, at bottom, what does the responsibility-blocking work for externalists like me. He seems to believe that, in manipulation stories like this, the alleged responsibility-blocking buck must stop with constitutive luck. But it would not be at all surprising if an externalist were to believe that it also matters for moral responsibility how the agent’s constitutively lucky condition came to be – a historical matter. It may be that the difference in the causal history of little Tony’s and Chuck’s constitutive luck (shortly after his manipulation) supports a difference in verdicts about moral responsibility for actions. And it may be that the historical difference supports a significant difference in what it takes for Tony to perform his first morally responsible action and what it takes for Chuck to perform his first post-manipulation good deed for which he is morally responsible.

    In Manipulated Agents, on the basis of my assessment of a range of cases, I endorsed the following claim:

    NFMR. An agent does not freely A and is not morally responsible for A-ing if the following is true: (1) for years and until manipulators got their hands on him, his system of values was such as to preclude his acquiring even a desire to perform an action of type A, much less an intention to perform an action of that type; (2) he was morally responsible for having a long-standing system of values with that property; (3) by means of very recent manipulation to which he did not consent and for which he is not morally responsible, his system of values was suddenly and radically transformed in such a way as to render A-ing attractive to him during t; and (4) the transformation ensures either (a) that although he is able during t intentionally to do otherwise than A during t, the only values that contribute to that ability are products of the very recent manipulation and are radically unlike any of his erased values (in content or in strength) or (b) that, owing to his new values, he has at least a Luther-style inability during t intentionally to do otherwise than A during t. (pp. 66-67)

    The technical expression “Luther-style ‘inability’” flags roughly the sort of (alleged) inability Daniel Dennett gestured at in his well-known discussion of Martin Luther’s remark that he could not do otherwise than register his protest (Dennett 1984, p. 183; for discussion, see Mele 2019, pp. 66-67). Time t is a stretch of time shortly after the manipulation.

    Obviously, if NFMR is true, then a necessary condition of an agent’s being morally responsible for A-ing is that he does not satisfy conditions 1-4. And if that is so, historicism is true; this necessary condition has historical elements.

    Notice that if NFMR is true, then something that can be said in favor of the judgment that little Tony is morally responsible for his decision not to snatch the toy cannot be said in favor of the judgment that Chuck is morally responsible for his first good deed. Tony does not satisfy the quartet of conditions specified in NFMR. That fact counts in favor of Tony’s being morally responsible for his deed, if NFMR is true; it gets him over one hurdle. And, of course, Chuck, unlike Tony, does satisfy this quartet of conditions; he does not get over this hurdle.

    Suppose we agree to read (1) “What explains Chuck’s not being morally responsible for his first good deed in One Good Day is his lack of control over the character from which he acts then” as entailing (2) “No agent who lacks control over the character from which he acts is morally responsible for his action.” Then I reject (1) and say that although what explains Chuck’s lack of moral responsibility for that deed includes his lack of control over the character from which he acts then, there is more to the explanation, and I point to NFMR for guidance about what more is involved.

    Question 2. If Chuck “isn’t responsible for the good he does right after manipulation, when does he become responsible, and how?”

    First, I should say that in some versions of the story in which Chuck persists in his good behavior for many years (and then dies), I don’t see him as morally responsible for any of his good deeds. Suppose, for example, that at the end of every day, Chuck’s memory of that day is permanently erased. He wakes up the next day and performs lots of good deeds – and then, that night, his memory of that day is permanently erased. This goes on for years. Just as I see Chuck as lacking moral responsibility for his good deeds on day 1, I see him as lacking moral responsibility for his good deeds on day 10,000. One thing that this thought experiment illustrates is that there is more to a manipulated agent in a radical reversal case like One Good Day becoming morally responsible for the performance of value-driven good deeds than his performing such deeds over a stretch of time that you can make as long as you like.

    So how does the memory-erasing trick affect an agent’s prospects of becoming morally responsible for deeds of a sort that he was initially manipulated into performing? Well, one thing the trick does is to prevent the agent from learning from the consequences of his behavior what is good for him and what isn’t. So, we might suspect that part of what is involved in such an agent’s becoming morally responsible for such deeds is such learning. As I mentioned in my response to Carolina’s commentary, I spun a manipulation story that features such learning in Autonomous Agents (1995, 169-72, 175-76 n. 30), and my reply to Taylor is getting so long that I won’t fill in the details here. (Btw, Taylor, are you still playing racquetball?)

  9. Thanks, Gunnar. This helps, but I do wish I had a better memory of your papers on these issues. Your points about abilities are important. To make the case work, as you say, it must involve “enough ability not to completely undermine demands for non-historical reasons.” Of course, it won’t be easy to show where the threshold is for the needed level of ability. I’d ask internalists to tell me where they think the threshold is, and then I’d see what I can do. I’d try to find pairs of cases in which the values and ability levels are the same and yet one agent is directly morally responsible for A-ing and the other is not, owing to historical differences.

    Right, there are case-based arguments and case-independent arguments. One major task of Manipulated Agents is to persuade people who share my intuitions about cases that they shouldn’t worry that various arguments for internalism or for the claim that compatibilism entails internalism – arguments that I rebut – undermine those intuitions. I don’t have any non-case-based arguments for externalism. I’m not sure what such arguments would look like.

    Justin, it’s nice to hear from you. My reply to Taylor applies to your first comment. About your second comment: right, it can be hard to blame Billy, whom we see as a victim. One question is whether what makes this hard provides – or points to – some support for the verdict that Billy does not deserve to be blamed. Another is whether it is *only* Billy’s being a victim (at a certain level of victimhood) that makes it hard to blame him. Perhaps someone would like to weigh in on these questions.

    Taylor and Justin: What do you think about Carolina’s last suggestion?

  10. Carolina, I find that qualification attractive as well—thanks for bringing this up! Whether or not it is appropriate to hold someone morally responsible will depend, on my view, on a host of considerations, not simply on whether the person is in fact morally responsible for what they do. Take blaming, for example. (Praising may not be exactly parallel, given that it’s a positive response and may require less justification than a negative response.) Even if I’m blameworthy for X, it may be inappropriate for you to blame me for X if you frequently do stuff like X, or if my doing X is none of your business. Similarly, if you are blameworthy for wronging me and then sincerely have a change of heart, go through steps to make amends, etc., it seems that, unless perhaps what you did was very bad, it would be inappropriate for me to continue blaming you. So too, in a radical reversal case where the agent would not have done what they did had they not been manipulated and where they would no longer do that sort of thing after they’ve been reversed back to the way they were, I want to say that it may be inappropriate to blame them. Perhaps what’s driving my thinking on this is that, even if moral responsibility is not exclusively a forward-looking notion (I don’t think that it is), forward-looking considerations (such as potential moral improvement) do seem relevant to whether holding someone morally responsible is appropriate. And after a reversal has been reversed, it looks like the forward-looking reasons for blaming the manipulated agent disappear (though there are still plenty for the manipulators!). I want to think more about the details, as this raises all sorts of questions about the ethics of blame (and of praise, which I am inclined to think may be different). Curious to hear what you all think about this!

    Al, thanks for your responses, and I am still mulling over your Philosophical Studies paper. In your post here you say that I seem “to believe that, in manipulation stories like [One Good Day], the alleged responsibility-blocking buck must stop with constitutive luck. But it would not be at all surprising if an externalist were to believe that it also matters for moral responsibility how the agent’s constitutively lucky condition came to be – a historical matter. It may be that the difference in the causal history of little Tony’s and Chuck’s constitutive luck (shortly after his manipulation) supports a difference in verdicts about moral responsibility for actions.” If I understand correctly, the idea is that historicists (externalists) can explain why Tony is responsible and Chuck isn’t responsible by appealing to some historical necessary condition on responsibility like NFMR, and so while Chuck may be just as constitutively lucky as little Tony, the fact that Chuck’s constitutive luck was a result of manipulation makes a difference.

    If a manipulated agent’s failing to satisfy NFMR is supposed to explain a manipulated agent’s lack of responsibility (or at least why the manipulated agent isn’t responsible even though Tony is), though, I’m left wondering about what explains the truth of NFMR. I take it that the main (only?) support for historical conditions comes from reflection on cases of manipulation. But to turn around and use NFMR to explain Chuck’s lack of responsibility seems like a mere restatement of the initial judgment (intuition) that manipulated agents aren’t responsible, and that’s not an explanation of what it is about the manipulation that does the responsibility-blocking.

    Maybe I’m not seeing how NFMR is supposed to be doing explanatory work? Or maybe there’s no answer to my question about explanation? I’m very curious to hear what others think about this. As I see it, a nice feature of my view about constitutive luck doing the responsibility-blocking (more precisely, responsibility-mitigating) is that there is a tight connection between luck and control, such that the luckier something is for a person, the less it is under their control. And since I (like many others) take control to be a (perhaps the) central condition on responsibility, agents’ constitutive luck can do the explanatory work that I am looking for, and the explanation seems very natural to me.

  11. Carolina (and Al): I accept the added qualification.

    Al: I can see how the conditions listed in NFMR might (considerably) mitigate blameworthiness. But if the agent could have done the right thing for the right reasons, I’m inclined to think he’s still blameworthy. I grant, though, that I’m in the grip of a (partial) theory, and that that might be driving my intuitions here.

  12. Thanks, Justin. It’s good to see how honest we’re all being. I ran the x-phi studies reported in the appendix to get some evidence on how biased I might be myself. I was relieved that they went my way, but I don’t make much of them beyond being a check on myself.
    I’ll reply to Taylor shortly.

  13. Thanks again, Taylor. I’ll work my way up to the kind of thing I think you’re asking for. Start with little Tony. He has some capacities for control over his mental life, but, until today, they did not quite reach any threshold for freedom-level and moral-responsibility-level control. (For those listening in, here’s a reminder: Taylor and I are imagining that Tony is about to make the first decision for which he is morally responsible.) I say “any threshold” (rather than “the threshold”) because I’m inclined to believe that the threshold for moral responsibility for children doing childlike things is lower than that for adults doing adult things (see Free Will and Luck, 129-33 and Manipulated Agents, 31-34). So I can say that even if Chuck in One Good Day and little Tony (at the time at issue) are equally constitutively lucky and we don’t need to take historical facts into account, it may be that Tony is morally responsible for his decision not to snatch his little sister’s toy whereas Chuck is not morally responsible for his good deeds.

    Suppose you reply that there is only one threshold for moral responsibility, the one Tony crosses for the first time today. Does Chuck also cross that threshold? It is a threshold for moral responsibility after all. So the question whether Tony is morally responsible for any of his good deeds on that day immediately arises. Are there any differences between Tony and Chuck that can motivate asymmetrical answers about the two agents? Well, Chuck’s value system on that day – what drives his good behavior – was produced in a way that entirely bypassed his capacities for control over his mental life, and this is not true of Tony’s value system. The strength of Tony’s desire to snatch his sister’s toy and the strength of his desire to refrain from snatching it are both partial products of what he has learned from past experiences. So are his belief that it would be fun to snatch the toy and his belief that it would be wrong to snatch it. And so on. Considerations of this sort can be used in an attempted explanation of why it is that although Tony is morally responsible for his decision, Chuck is not morally responsible for his good deeds. People who intuit that Chuck is morally responsible for his good deeds probably won’t be impressed by this difference. But people with the opposite intuition can appeal to this difference (and the closely associated considerations packaged in NFMR) to explain why they make asymmetrical judgments about the two cases and even in an attempt to justify making asymmetrical judgments.

    Here’s a little tangent. The claim that because Tony and Chuck are equally constitutively lucky, they are equally morally responsible would be question-begging in the present context. But are they even equally constitutively lucky? You characterize constitutive luck as “lack of control over the character from which [one acts].” Even before Tony performed the first action for which he was morally responsible, he had some control over his character (he was capable of reflecting on and learning from the consequences of his behavior and modifying his behavior accordingly, e.g.) – it just wasn’t moral-responsibility-level control; the capacities at issue weren’t quite robust enough yet. And Chuck had no control at all over the character with which he awoke on One Good Day. But probably by “control” in your characterization of constitutive luck you mean moral-responsibility-level control.

  14. Oops! Probably you caught this (or didn’t notice the slip), but “Tony” was supposed to be “Chuck” in the following sentence: “So the question whether Tony is morally responsible for any of his good deeds on that day immediately arises”.

  15. Thanks for the response, Al. This is very helpful!

    If I can follow up one more time, I want to bring up another type of agent that you sometimes discuss, namely “instant agents” (including “minutelings” and others), who are somehow or other created as psychological duplicates of typical adult human beings. Instant agents have no control over the value system that leads to their behavior, so they seem to me on a par with manipulated agents like Chuck with respect to their responsibility upon being created/manipulated. But you want to deny Chuck’s responsibility and yet (if I recall correctly) leave it open whether instant agents are responsible. Why this difference?

    I suspect you’ll say that instant agents weren’t around to have their capacities for control over their mental lives bypassed. In your explanation of the responsibility-relevant difference between Chuck and little Tony above, you say: “Chuck’s value system on that day – what drives his good behavior – was produced in a way that entirely bypassed his capacities for control over his mental life, and this is not true of Tony’s value system.” Since the creation of instant agents does not involve the bypassing of agents’ capacities for control over their mental life, perhaps that is a responsibility-relevant difference between them and manipulated agents like Chuck. Still, since instant agents are entirely at the mercy of their creators with respect to the value system that is produced in them, just as Chuck is entirely at the mercy of his manipulators, if we allow for instant agents to be responsible, then I have a hard time seeing how the fact that Chuck’s capacities were bypassed would render him not responsible.

    Here’s another way to put my point: the historicist wants to say that the sort of manipulation Chuck undergoes blocks responsibility-level control over his behavior (at least for what he does soon after the manipulation). But what does the manipulator take away from Chuck that is possessed by instant agents such that Chuck lacks responsibility-level control but instant agents have it?

  16. Thanks again, Taylor. As you say, I take no stand on whether minutelings (instant agents of a certain kind) are morally responsible for any actions they perform in their minute-long lives. Suppose that someone, Zeke, tells me that he has the intuition that Chuck is not morally responsible for his good deeds in One Good Day and no intuition about a particular minuteling, Minnie; he’s agnostic about whether Minnie is morally responsible for A-ing. My approach in situations like this is to think about what might account for the difference in Zeke’s attitudes, whether the attitudes at issue are consistent with each other, and what might be said for the attitudes. So what might account for Zeke’s agnosticism? One suggestion is that Zeke’s conception of moral responsibility doesn’t yield a verdict about Minnie one way or the other. And what might account for Zeke’s belief that Chuck isn’t morally responsible for his good deeds? Perhaps considerations of the sort I offered, including bypassing and the radical reversal of Chuck’s values. Together, this suggestion and these considerations may account for the difference in Zeke’s attitudes. Is the pair of attitudes consistent? Yes. And what might be said for the attitudes? Regarding the attitude that Chuck isn’t morally responsible for his good deeds, I’ve already made my case. So what about the agnosticism? To be brief about minutelings (for other readers), they are living adult bodies that lack consciousness until the psychological profile of an actual agent is implanted in them. When they’re “turned on,” they do something – e.g., shoot a political prisoner as a member of a firing line – and quickly die. Minutelings last only for a minute. Perhaps the details are weird enough that agnosticism is appropriate.
