What is doing the work in manipulation cases, i.e. what is pushing the intuitions that the agents in these cases are not free/not responsible?
Consider the following version of a manipulation case:
The Uninterested/No Purpose Manipulation Case
There is a Goddess, Liana, a younger sister of the Goddess Diana, who—just like her sibling—lives in a deterministic universe and has special powers: she can manipulate atoms to create zygotes with a view to producing ideally self-controlled agents who will then perform certain specific actions that are close to her divine heart. However, unlike Diana, Liana has no interest in any particular event occurring 30 years later (or at any point in the future). Liana is bored, though, and she's wondering whether it'd be fun to create a zygote just like her sister did. She cannot decide and so opts to roll the dice to settle the matter (after all, playing hazardous games adds excitement to her godly monotony). If it's an even number, she will create a zygote; if it's an odd number, she won't. Either way, she will be satisfied with the result. Liana rolls two sixes—zygote it is, then! But now Liana is at a loss as to which zygote to create. She has help, though—there's a special roulette wheel which coincidentally happens to have as many numbers as there are ways in which she could combine atoms to produce a zygote. Indecisive and without any interest in creating any particular zygote (or in bringing about any particular event through the activity of this zygote once it develops into an ideal compatibilist agent), she spins the wheel. The roulette ball lands on 42, and Liana creates a corresponding zygote in Berry. That's the start of Kearnie. Liana could find out what Kearnie's life is going to be like and what actions he will undertake 30, 40, 50 years later, etc., but she simply has no interest in finding out and never does.
We can change the above (Mele-like) case to make it more like the first and/or second case from Pereboom's four-case argument. In each version, we would have a team of very clever scientists living in a deterministic universe who are testing their mind-influencing gear and techniques on Mr. Spleen (techniques that prove fully effective, in the sense that they can successfully direct a person's current mental states or "set someone up") but who are uninterested in any particular outcome, in the sense that they do not use their techniques on Mr. Spleen with a view to his carrying out any particular final act. (These neuroscientists happen to like Ms. Peacock, who happens to be funding this research, and thus have no wish to kill her.)
In any case, neither Liana nor the neuroscientists have a horse in the race—at least not the same kind of horse as Diana or the four-case scientists. Liana does not create the Kearnie zygote so that it fulfills a particular purpose. The scientists do not radio-wave Mr. Spleen's brain for the purpose of his going on a rampage—or for any other specific purpose. They are merely testing the effectiveness of their methods.
Why do these considerations matter? One might argue that what is pushing the intuitions in the original manipulation cases is that Ernie and Mr. Green are doing someone else's bidding—Diana and the neuroscientists are using them as tools for their flights of fancy. This is perhaps more obvious in Ernie's case—one plausible way to read the case is that Ernie would not even exist had Diana not had plans for this big event 30 years later. (However, it is also plausible to think that Mr. Green wouldn't have been manipulated in the first place if the scientists had not been interested in killing Ms. Peacock.)
Now, there is a worry: if the manipulation cases involve or start with cases of manipulation with and for a purpose, it will be plausible to judge that the agents involved in these cases are not free or responsible (sorry, hard-liners). But it is an illegitimate move to start with such cases, because they are relevantly different from cases of uninterested/no-purpose manipulation. Arguably, manipulation cases should be stripped of any bells and whistles so as to make sure that irrelevant factors are not driving our judgments. In the original manipulation cases, what may be driving the judgments is that the agents are being used for a specific purpose—and that they are made in a specific way (or even made at all) to serve the manipulator. If so, this would, given the form of the manipulation arguments, tip the scales in favour of incompatibilism in an unwarranted way. However, if we formulate the manipulation cases without the purpose element, as we have done in the cases of Kearnie and Mr. Spleen, it becomes more difficult for the manipulation arguments to go through. One reason for thinking this is that our intuitions about such cases may not straightforwardly support the no-freedom/no-responsibility judgments. Or one might say that we don't have a clear grasp on these cases in the first place. Even more strongly, one might claim that our intuitions about such cases are in the relevant ways similar to our intuitions about ordinary cases of determinism. In any case, such scenarios would then not be fit for use in manipulation arguments. (Some related worries about the manipulation argument dialectic have been raised, for instance, by Stephen Kearns, who also lists some variations on the manipulation cases in his paper "Aborting the Zygote Argument"—however, Kearns does not diagnose the problem in the same way.
My concerns here are also different from those of Robyn Repko Waller in her “Threat of Effective Intentions…” even though the focus of her work is on the intentions of the manipulator.)
My point can also be put in the following way. Being manipulated for a purpose at the very least strengthens the intuition that the agents in the manipulation arguments are unfree/not responsible. Intuitively, there is a difference in the freedom and/or responsibility of the agents in the uninterested/no-purpose manipulation cases versus those cases in which the manipulator has a stake in the manipulated agent performing a very specific action. The latter set of agents would be less free/responsible than the former set. But such bolstering of the intuitions is undesirable. If that's the case, this further casts doubt on the "No Difference" premise of both the Zygote and the Four-Case arguments.
Thanks for the great post, Marcela! I’m curious about how you’d compare your cases of uninterested manipulation with cases of “manipulation” where no agent is setting things up (the agent is determined by some natural force, or a brain tumor, say). Would you say that there’s no relevant difference (with respect to the moral responsibility of the determined agents) between these cases, or at least that there’s nothing to make our intuitions about these cases differ?
I’d also be interested to hear what you think of Carolina Sartorio’s error theory for the common intuition that purposefully manipulated agents are unfree (from the last chapter of her book, Causation and Free Will). On her view, there may be a dubious “dilution of responsibility” effect in the cases of purposeful manipulation, which is to say that we are mistakenly thinking that another agent’s involvement (or perhaps their purposes) detracts from (or undermines) the original agent’s responsibility. As Sartorio explains, however, ubiquitous as this psychological effect may be, arguably it is only an appearance that responsibility dilutes, and we might say the same for a parallel “dilution of control” effect in the purposeful manipulation cases. This is consistent with your view that our intuitions vary from cases of purposeful manipulation to cases of uninterested manipulation, but it would be a reason to be skeptical of these differing intuitions.
Hi Taylor, thank you for the great questions!
Let's take them in turn. I am not 100% wedded to any of the following, but my thinking is something like this. Concerning cases of "interested" manipulation, "uninterested" manipulation, natural forces, brain tumors, etc., I think that intuitions will differ, as these cases have different salient features. I certainly think, as I argue above, that uninterested manipulation makes for a less intuitive case of non-freedom/non-responsibility than interested manipulation. Regarding cases of natural forces, illness, or brain tumors, our intuitions of non-freedom might weaken even further, given that the agential aspect of the earlier cases is missing entirely from these cases (and I think that this likely affects how we judge them). It's an interesting exercise to see whether it would be dialectically desirable, or perhaps even required, to substitute the standard cases of Mele/Pereboom-type manipulation with cases like the ones that you suggest. (I come back to this later on.)
With regard to how differences in intuitions might translate into differences in correct ascriptions of responsibility (and relevant differences between the cases), I am tempted to say that the agent in cases of interested manipulation is less responsible than the agent in uninterested manipulation. (This is all on the assumption that free will is possible at all.) However, I'm not sure whether an agent in an uninterested manipulation case is any different from an agent affected by natural forces, a brain tumor, etc. I'm leaning towards there being no relevant difference between (at least some of) these cases. The fact that these latter cases are not cases of manipulation at all might speak in favor of some relevant difference. That said, these cases do share important features with uninterested manipulation, e.g. all of them involve non-interested "forces" that determine agents' actions. As Michael McKenna emphasizes, the choice between a hard-line and a soft-line reply to manipulation arguments will depend on which cases we start with. Relating to what I say above, if we start with interested manipulation, I might say the right way to respond to the manipulation argument is a soft-line reply. If it's uninterested manipulation, it may be the hard-line reply after all.
My general worry, which I tried to capture in the original post, is that manipulation arguments need to start off with uninterested manipulation so as not to skew intuitions undesirably. But this will not give defenders of manipulation arguments their desired result. What's more, if there is no relevant difference between uninterested manipulation and natural forces, etc., then the manipulation arguments should start with those cases. I think that if you do that, however, the argument forfeits any dialectical advantage it might have had.
As for Carolina Sartorio's error theory, I think it's very plausible that there are some "dilution" effects (of both control and responsibility) that do not track correct ascriptions of control and responsibility, but I'm not sure that such effects apply to all the manipulation cases we have been discussing. Whether or not such dilutions are (merely) apparent, I think they are a problem for an incompatibilist either way. If the agents in interested cases of manipulation merely appear less responsible or free than those in uninterested cases (because of some unwarranted dilutions), this suggests that an incompatibilist should not start off with such cases, as they bring in something extra (in other words, it's this special effect or bias that might lead us to judge the agents in these cases as less free or responsible than in other cases). But if we start off with cases that are not affected by this merely apparent dilution, an incompatibilist might not achieve the desired effect (because the manipulation argument might not even get off the ground). However, if the dilution is not apparent but actual, starting with cases of interested manipulation won't help either, as one can stop the argument by rejecting the no-difference premise.