Consider absolute-level satisficing consequentialism:
ALSC: “There is a number, n, such that: An act is morally right iff either (i) it has a utility of at least n, or (ii) it maximizes utility” (Bradley 2006, 101).
ALSC, like many other versions of satisficing consequentialism, permits agents to go out of their way, for absolutely no good reason, to prevent some good state of affairs from coming about. To illustrate, consider the following scenario:
Let U(x) = the overall utility that is produced if S does x, and let’s suppose that n = 100. Assume that a1 is the act of minding one’s own business while sitting on the couch watching TV, that a2 is the act of dissuading nine others from donating money to Oxfam, that a3 is the act of dissuading six others from donating money to Oxfam, and that a4 is the act of giving someone who is easily convinced by Singer-type arguments a copy of “Famine, Affluence, and Morality.” Let’s suppose that someone is going to pay you handsomely if you do this and that, therefore, it is in your self-interest to give this person a copy of “Famine, Affluence, and Morality.” Also, suppose that dissuading others from donating money to Oxfam is not something that you enjoy doing and is, therefore, detrimental to your self-interest. So, in terms of your self-interest, the acts rank as follows: a4 > a1 > a3 > a2. Now, here are the utilities of these acts along with what ALSC implies regarding their deontic statuses:
act    U(x)    deontic status (per ALSC)
a1     +180    permissible
a2      +90    impermissible
a3     +120    permissible
a4     +190    permissible
As this case illustrates, ALSC has implausible implications. Clearly, a3 should be impermissible, as there are alternatives that would be both morally and self-interestedly superior. It’s impermissible to make a sacrifice when this will make things not only morally worse, but also worse for you.
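To make the verdict on a3 fully explicit, here is a minimal sketch of ALSC’s decision rule applied to the case (nothing here beyond the stipulated utilities and the assumption that n = 100):

```python
# A minimal sketch of ALSC's verdicts on the four options above.
# The labels, utilities, and the threshold n = 100 are just the stipulations of the case.

n = 100
utility = {"a1": 180, "a2": 90, "a3": 120, "a4": 190}

def alsc_permissible(act):
    """Right iff (i) the act has a utility of at least n, or (ii) it maximizes utility."""
    return utility[act] >= n or utility[act] == max(utility.values())

for act in sorted(utility):
    print(act, "permissible" if alsc_permissible(act) else "impermissible")
# a1, a3, and a4 come out permissible; only a2 is impermissible.
```

In particular, a3 satisfies clause (i) simply because 120 ≥ 100, even though both a1 and a4 are better morally and better for you.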
Two other versions of satisficing consequentialism avoid the implausible implication that a3 is permissible. The first is bird-in-hand satisficing consequentialism:
BIHSC: There is a number, n, such that: An act is morally right (i.e., permissible) iff (a) it has a utility that is at least as great as the utility of the “act” of doing nothing, and (b) either (i) it has a utility of at least n, or (ii) it maximizes utility. (I’m not sure how to precisely explicate the notion of ‘doing nothing’, but I want to hold that intentionally standing perfectly still so that dust will collect on some trigger mechanism and thereby kill someone is not doing nothing.)
The second is Cullity’s self-sacrificing absolute-level satisficing consequentialism:
CSSALSC: “There is a number, n, such that: An act, a, performed by agent S, is morally right iff either (i) a has a utility of at least n, and any better alternative is worse for S than a; or (ii) a maximizes utility” (Bradley 2006, 107).
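Before comparing them, here is a minimal sketch of how the two principles sort the four options from the case above. (Two assumptions are built in purely for illustration: ‘doing nothing’ is modeled as a1, and the self-interest scores are placeholder numbers that encode only the stipulated ranking a4 > a1 > a3 > a2.)

```python
# A minimal sketch comparing BIHSC and CSSALSC on the four options above.
# Assumptions: "doing nothing" is modeled as a1, and the self-interest scores are
# illustrative numbers encoding only the stipulated ranking a4 > a1 > a3 > a2.

n = 100
utility = {"a1": 180, "a2": 90, "a3": 120, "a4": 190}
self_interest = {"a4": 4, "a1": 3, "a3": 2, "a2": 1}
DOING_NOTHING = "a1"

def bihsc_permissible(act):
    # (a) at least as good as doing nothing, and (b) good enough or utility-maximizing
    return (utility[act] >= utility[DOING_NOTHING]
            and (utility[act] >= n or utility[act] == max(utility.values())))

def cssalsc_permissible(act):
    # (i) good enough and every better alternative is worse for the agent, or (ii) maximal
    better = [x for x in utility if utility[x] > utility[act]]
    clause_i = utility[act] >= n and all(self_interest[x] < self_interest[act] for x in better)
    return clause_i or utility[act] == max(utility.values())

for act in sorted(utility):
    print(act, "BIHSC:", bihsc_permissible(act), "CSSALSC:", cssalsc_permissible(act))
# BIHSC permits a1 and a4; CSSALSC permits only a4, so CSSALSC makes a4 required.
```

On these stipulations, neither principle permits a3; they differ only over a1.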
Which is better? Well, on CSSALSC, but not on BIHSC, a4 is morally required, and this seems to me to be the right result, for the next best alternative, a1, involves forgoing a personal benefit when forgoing this benefit will only make things morally worse. (But, of course, this isn’t as bad as ALSC, which permits making a personal sacrifice when this will only make things morally worse.) So it seems to me that CSSALSC is the more plausible of the two. Now, here’s a question: When it comes to constructing a theory of rationality as opposed to a theory of morality, what would the analogues of CSSALSC and BIHSC look like, and which would be the more plausible?
Well, bird-in-hand satisficing rationality would, I think, look like this:
BIHSR: There is a number, n, such that: An act, x, performed by agent S, is rationally permissible iff (a) x has an S-utility that is at least as great as the S-utility of the act in which S does nothing and (b) either (i) x has an S-utility of at least n, or (ii) x maximizes S-utility. (An act’s S-utility is the amount of utility that S would have were S to perform that act.)
But is there any analogue of CSSALSC for a theory of rationality? The problem, of course, is that any better alternative in terms of S-utility cannot be worse for S. So it’s difficult to construct any obvious analogue. Is bird-in-hand satisficing, then, the most plausible version of rational satisficing? I should note that the idea of bird-in-hand satisficing is owed to Pat Greenspan’s “Sensible Satisficing?” (manuscript).
I’m slightly puzzled. Why think that ALSC has the implausible implication that it permits making things morally worse and worse for you? Sure, it does have that consequence with the axiology you have given. But why not think that that’s just a consequence of the axiology? Couldn’t we say that making things morally worse and worse for you itself has disvalue and thus makes the option less good? Any sort of ideal consequentialist view should be able to say this. And, if it does, then that option shouldn’t come out as permissible.
I’m not sure you can get theories of rationality out of these views, given that they only give ‘external’ criteria for the rationality of actions. Those conditions could be satisfied even when the agent accidentally acted in the way he was justified in believing he ought not to act. Maybe if something about expected utilities were added.
To your final question, maybe you could say that something like timeslice-utilities are analogous to S-utility. So you would get:
CSSALSR: “There is a number, n, such that: An act, a, performed by agent S, is rationally permissible iff either (i) a has an S-utility of at least n, and any better alternative is worse for S in the short term than a; or (ii) a maximizes utility”.
Of course, (ii) should read ‘a maximizes S-utility’.
Jussi,
As I’m employing the notion of ‘utility’ here, utility is just a measurement of the goodness of an outcome. (And S-utility is just a measurement of how good an outcome is for S.) So one outcome has more utility than another if and only if it is better than the other. Also, I’m assuming that, according to consequentialism, it would be morally bad for a worse outcome (an outcome with less utility) to obtain. So I don’t see how there could be any axiological fix here. Utility just includes whatever the axiology holds to be good.
But if utility=goodness and S-utility=goodness for, then I don’t see the objection to satisficing consequentialism. It cannot be counter-intuitive if a view allows to bring about less goodness or less goodness for if there is no sense at all about what goodness in this abstract sense is. A satisficing view should be ok thus far as far as intuitions go because no substantial result is so far provided.
You will get a counter-intuitive implication if you then identify goodness with aggregate well-being and goodness for with the agent’s well-being. But that is a substantial axiological view a satisficer needn’t be committed to. If she says that it is bad in itself to reduce well-being or to prevent well-being improvements, you cannot derive substantial counter-intuitive consequences from the basic satisficing view. Oh, I think I need to write something on this…
Doug – This is very interesting. I haven’t been thinking of bird-in-hand satisficing with reference to the consequentialist literature, or for that matter even as the basis for an overall theory of rationality, but I think Jussi is right that reference to short-term effects might result in an alternative to the principle you state. My approach in the paper I’m trying to fix up now is informal, but I consider appeal to upfront costs (what I think of as a “temporal hump” to get over) as a possible explanation of *why* it seems rational (i.e. rationally permissible) to satisfice in certain everyday cases. But I think this would undermine the defense of satisficing, as I understand it – that’s what the question mark in my title is meant to allow for – only if “standing pat” (BIHS) would *not* be rational if things were fine as they are but one could improve them without significant short-term difficulties (problems of readjustment and the like). A counterexample that occurred to me while working on my first draft was being offered a somewhat better seat on a plane after I’m already settled in my good-enough assigned seat. I might just say “I’m fine here” and leave it at that – without thinking of moving as taking particular effort, and (I’d say) without irrationality.
Where is Cullity’s view stated, incidentally? I was initially puzzled by your name for it, “*Self-sacrificing* absolute-level satisficing consequentialism.” It *disallows* certain kinds of self-sacrifice, doesn’t it – i.e. treats as morally impermissible less-than-optimal acts that would be worse for the agent?
I confess to some confusion. I thought the point, or one point, of satisficing was that it’s too hard to figure out what maximizes utility, so you just pick something “good enough.” But CSSALSC doesn’t allow you to do this, which makes it a rather odd satisficing theory. If “rationality” means something like “self-interested choosing” any analogue would have the same problem. Am I just barking up the wrong tree here?
“Which is better? Well, on CSSALSC, but not on BIHSC, a4 is morally required, and this seems to me to be the right result, for the next best alternative, a1, involves forgoing a personal benefit when forgoing this benefit will only make things morally worse.”
Isn’t this question-begging? If we’re really satisficers then it’s not true that a1 makes things morally worse than a4. Once we’ve got those 100 utils, it’s all the same from there, morally speaking.
Heath — What you’re thinking of is Simon’s original notion, as now in use in economics and psychology: that satisficing is justified by limitations in human cognitive capacity. But the philosophers’ notion is supposed to be a clearer departure from maximizing — which, after all, is supposed to weigh in the costs and uncertain outcome, etc., of further deliberation. In ethics satisficing apparently has been proposed as a way of avoiding the extreme demands of maximizing consequentialism. In reference to rationality, Michael Slote at one point was defending a fairly extreme version, taking satisficing as required rather than just permissible — required by the virtue of moderation — not just in cases where you’ve already reached a satisfactory level (what I call “bird-in-hand” cases), but across-the-board.
Doug — I’m puzzled by CSSALSC too, as it begins to sink in. Doesn’t it retain maximizing assumptions for self-interest? Why not just say that a is right (i.e. permissible) as long as it’s good enough overall and also good enough for the agent himself? I guess you would need another constant, m, for the threshold of adequacy from the agent’s standpoint, but I’ll let you work up the relevant principle. Allowing satisficing here too may not help your case against ALSC, but otherwise the element of satisficing you do allow seems ad hoc.
Everyone,
I’m very sorry to be AWOL from this discussion, but I’m dealing with a minor personal crisis at the moment. And depending on what happens with Gustav, I may be dealing with a second one shortly. But please continue the discussion without me. Again, I apologize for posting and then being AWOL.
I hope Doug will rejoin us before too long, but meanwhile, maybe I can get some help from others in interpreting these principles. I take it that clause (ii) in ALSC, the permission to maximize — repeated as (b)(ii) in the further principles — is needed to handle cases where there *are* no good enough alternatives, i.e. no act meets the value-threshold n, as needed to satisfy (1). But the exception protecting self-interest in the later principles applies only to (1). So that helps me see why Cullity’s view in CSSALSC is referred to as “self-sacrificing”: it counts maximizing as permissible in cases where the agent sacrifices something thereby.
But it seems odd to limit permissible self-sacrifice to maximizing if you’re also permitting some cases of satisficing. Why prohibit the next-best alternative if this also involves self-sacrifice — even if a lot less self-sacrifice than the best, and even if it has almost as good results overall?
So again, I’m led to think of CSSALSC as an ad hoc way of handling certain cases that are problematic for a maximizer, rather than an alternative principle that might capture general intuitions favoring satisficing. The principle essentially retains maximizing as the default requirement — as opposed to just allowing an exception for maximizing in a case where nothing satisfices, i.e. the reason for adding (ii) in the first place — but allows an exception for certain cases of satisficing, severely limited by the insistence that they do at least maximize self-interest.
Can that be right?
In case it’s not obvious: by (1) in my first paragraph above, I meant (i).
Patricia – the view attributed to Cullity was raised by him in conversation. I think he was just coming up with a satisficing view that wasn’t susceptible to the counterexamples I was giving. I don’t think he actually believes it, or that it was supposed to capture some intuitively plausible idea. Robert Elliot does seem to endorse something like that view in his book Faking Nature.
You’re right that according to CSSALSC maximizing is the “default”, in the sense that there’s a condition that must be met in order for a suboptimal act to be permitted. If there were no such condition, gratuitously making things worse (but not worse than the threshold) would be permitted. CSSALSC avoids this result by allowing suboptimal acts only when maximizing carries a personal cost. I still think it’s false. I give a counterexample in the paper Doug is quoting from.
In Doug’s absence, let me take a stab at reformulating CSSALSC as a satisficing principle, i.e. one that treats satisficing as more than a limited concession within a basically maximizing view.
This isn’t at all my usual way of thinking about the subject, so I’ll probably need help spelling a principle out correctly. I must say, too, that I’m not really comfortable with the reference to a specific number n for the thresholds of “good enough,” since I think that commonsense rational satisficing (which is all I mean to be defending in my own work) allows for a threshold that can be specified only vaguely. But for present purposes, let me accept that – and even compound it, by adding reference to a threshold of utility for S, the agent. It looks as though a genuinely satisficing version of CSSALSC might run as follows:
CSSALSC’: There are numbers n and m such that: An act, a, performed by agent S, is morally right iff either (i) a has a utility of at least n and an S-utility of at least m, or (ii) a maximizes utility.
The point it would make is that you should satisfice *subject to* not making things too bad for yourself – whereas you can maximize even at the cost of self-sacrifice.
I still find it odd, though, to allow self-sacrifice *only* for the maximal utility. Or to put it another way: to *dis*allow self-sacrifice for submaximal satisficing options.
In any case, the version for rationality, following Jussi’s CSSALSR, would run as follows:
CSSALSR’: There are numbers n and m such that: An act, a, performed by agent S, is rationally permissible iff either (i) a has an S-utility of at least n and a short-term S-utility of at least m, or (ii) a maximizes S-utility.
But this would seem to be subject to the same problem. In informal terms: if some short-term self-sacrifice is entailed by all the available satisficing options, the only permitted option would be maximizing. But why – particularly in cases where the sacrifice is less for the satisficing than for the maximizing option?
A thought I have is that, instead of adding (ii) to handle the case where nothing is good enough, one might just stipulate that the best option always counts as good enough, i.e. that at least one option has utility of at least n. Then any S-utility condition would apply to the maximal option as well. However, that brings back my discomfort with n – it’s not supposed to be the same number for all choices, is it?
Ben — Thanks for your clarifications, which arrived after I posted the suggestions just above.
Independently, I had concluded that CSSALSC would do better than the satisficing principle I just suggested in that it *would* allow a second-best option that involves a bit less sacrifice for S. Essentially, if I’ve now got it right, the principle says that up to a point (as set by n) you can satisfice overall as needed to minimize self-sacrifice. I still seem to see a problem, though: it looks as though, if all the satisficing-but-nonmaximizing options involved the same amount of self-sacrifice, but less than the maximizing option, it would be permissible to choose any one of them, even the one with the least utility overall. Can that be right?
With bird-in-hand satisficing, though — which just allows “standing pat” when things already are good enough — you wouldn’t be *making* things worse, gratuitously or otherwise, by satisficing, but just declining to make them better. So it seems to yield an answer to the objection you just stated to ALSC — which Doug had put to me earlier in slightly different terms, as a claim that satisficing would permit “going out of your way” to choose the worse option.
Doug’s case in his post here is meant to present a counterexample to BIHSC in similar terms, on the grounds that the principle would allow you to continue watching TV (a1) rather than earning money by convincing impressionable people (students?) to read Singer and promote more utility (a4). He grants that a1 as well as a4 exceeds the threshold n. However, he objects to the choice of a1 on the grounds that it “involves forgoing a personal benefit when forgoing this benefit will only make things morally worse.” But what I’d say is that, where *bird-in-hand* satisficing is in question, your choice wouldn’t *make* things worse but simply leave things as they are, even though they could be made better. By hypothesis, they’re good enough (though the current state of the world may make that assumption implausible). So I don’t think the case really yields an objection to BIHSC. Do you?
I’m still not entirely convinced by the original argument against ALSC. I’ll buy that a3 is irrational, but even if (for the sake of argument) all immoral actions are irrational, surely not all irrational acts are morally impermissible.
I’m also puzzled by Doug’s case against ALSC. There’s nothing to explain why convincing others not to donate produces *any* positive utility. It looks as though the numbers Doug assigns in part reflect the utility already “in hand.” So a3 would be declared impermissible by ALSC on Doug’s reading because it doesn’t decrease utility *too much*, not because it produces any utility — and Doug would object that making things worse would seem to be impermissible regardless of how much.
Oops. I meant just above to say that a3 would be declared *permissible* by ALSC.
Further, I think that what satisficing is meant to allow is *not* making things worse than they are, but just (in some cases) making them worse than they otherwise might be, as one does when choosing less than the best. How things are at present provides a benchmark for considering options, even in non-BIH cases. Is this built into the assignment of utilities, or should ALSC specify that n is no less than the utility of doing nothing?
A couple of minor observations:
1. ALSC is compatible with maximising consequentialism. Suppose the function U is bounded above, i.e. there’s a number r such that U(x) < r for all x. Then MC implies ALSC.

2. Speaking of amounts of utility ‘produced’ by actions is unnecessary and potentially confusing. For example, Patricia worries that Doug has not explained ‘why convincing others not to donate produces *any* positive utility.’ But Doug needn’t have said that it does. All he needed to say was that the outcome of convincing others not to donate would be worse than that of sitting on the couch, which in turn would be worse than that of giving someone Singer’s book, and so on. He needn’t have expressed this by saying that the first action would produce less utility than the second and so on. He needn’t have mentioned utility at all. We can state a view like ALSC without mentioning utility or numbers: There is an outcome x, such that: an act is morally right iff either (i) its outcome is at least as good as x, or (ii) its outcome is at least as good as that of every alternative. This is not equivalent to ALSC, but I think it’s close enough.
Hi Campbell. Thanks for this comment.
About 2: I’d just as soon do without talk of utility and numbers, but I think my worry could be re-stated without reference to utility, as a question about why Doug thinks the alternative in question satisfices. He did need to say that it was “good enough” — I take it that the utility assignments were meant to make that concrete — not just that it was worse than sitting on the couch, etc. My worry, remember, was about his assumption that it had *positive* utility.
About 1: I need more explanation of your point here. Are you just saying that there are *some conditions* (or maybe I mean assumptions) under which ALSC *would* imply MC? I would take the claim that the two principles are compatible to mean more than this.
I’m waiting for Doug to get back at some point to clarify ALSC and CSSALSC, along with their variants for rationality. Meanwhile: would BIHSC/R, if re-stated in your terms, just take outcome x to be the present one? The version I hold, which is just stated informally, is put forth just as a sufficient condition of rational choice — i.e. the choice to forgo a better option when you’re already at a stage that’s good enough — and omits (ii). But given the unlikelihood that you’ve already maximized (certainly in ethics!), a version re-stated in your terms *would* seem to be incompatible with MC/R, wouldn’t it?
I think the example can be changed to get around Patricia’s worry. We only need 2 alternatives to get the counterexample. Say Doug is sending some cash to Oxfam. He gives me an envelope with $200 in it to put in the mailbox. I can do what Doug says, resulting in 5 people’s lives being saved. Or I can change the address on the envelope to send it to some worse charity that will save only 4 lives with the money. As long as the utility of saving 4 people is greater than n, ALSC says I do nothing wrong by diverting the money. In this case there is no question, it seems, whether the diverting act brings about good consequences. It does. But it’s obviously wrong.
It seems to me that the view that what matters is whether the current state of the world is already “good enough” has got to be wrong, for basically the same reason. Say that the world is in pretty good shape right now. And if I stay put watching TV, it’s going to get better and better. But I have the power to screw things up so that it doesn’t get better and better. It stays just like it is, value-wise. That would be wrong. So I’m not seeing the bird-in-hand thing, the way Patricia describes it, as being an improvement. Doug’s version doesn’t have that problem, but I doubt that the notion of “doing nothing” makes sense, at least if it is supposed to count staying perfectly still so that dust collects on the detonator as doing something rather than nothing (as Doug suggests).
Ben — You did succeed in dispelling my worry about Doug’s case against ALSC.
However, could you clarify what you’re taking as the relevant difference between my version of BIHSR and Doug’s? I just speak in the message preceding of “the choice to forgo a better option,” and in general I tend to put the point in terms of options (what you have “in hand,” or choose to forgo) rather than acts done or omitted. In these terms, I’d be inclined to say that the option you already have in hand in a case like your TV case includes any future tendencies of the world, perhaps based on actions of others, to come out a certain way if you do nothing. So I don’t see a difference here between Doug’s position and mine.
As to the detonator case: why do you take this to suggest that the notion of “doing nothing” makes no sense, as opposed to not supporting BIHSC/R? I imagine there are other problems with spelling out that concept clearly, but here it looks as though one *can* do nothing — the concept applies — but would be wrong to, at any rate with the intention of making things worse.
Your main point, I take it, is that it would be wrong to do nothing where that results in a worse state of affairs, even if unintended, and even if not so bad that it no longer satisfices. Offhand, though, letting a detonator go off wouldn’t seem to meet that last condition, at any rate in the smaller-scale cases I’m thinking of in connection with BIHSR; and similarly for “screwing things up” for the long-term fate of the world in your TV case. But I’ve been having trouble putting together a better case for your position as applied to rationality — e.g. one involving minor physical deterioration unless you do something to keep yourself in shape. The assumption has to be that you’ll still be in *good enough* shape if you continue to do nothing long-term; if your inaction were only for a limited period, in most cases you’d be making up for any short-term deterioration by what you do later. However, if only minor and/or short-term deterioration is in question, it’s not clear to me that it would be irrational (meaning rationally impermissible vs. nonideal) just to allow that. There are other problems with the other cases that occur to me. Can you suggest something that would meet these worries and tell against BIHSR?
Patricia-
“As to the detonator case: why do you take this to suggest that the notion of “doing nothing” makes no sense, as opposed to not supporting BIHSC/R? I imagine there are other problems with spelling out that concept clearly, but here it looks as though one *can* do nothing — the concept applies — but would be wrong to, at any rate with the intention of making things worse.”
I agree with you, but I had in mind Doug’s parenthetical claim in the initial post:
“I’m not sure how to precisely explicate the notion of ‘doing nothing’, but I want to hold that intentionally standing perfectly still so that dust will collect on some trigger mechanism and thereby kill someone is not doing nothing.”
This is the notion of “doing nothing” that I am skeptical about. To the extent that we have an intuitive sense of what counts as doing nothing, standing perfectly still (for whatever reason) seems like a case of doing nothing. If it’s not, I need something to be stipulated as a definition of “doing nothing,” otherwise I don’t understand the view.
I will have to give some more thought to your last question.
Patricia,
By ‘compatible’ I meant logically consistent. If MC is true and U is bounded above then ALSC is also true.
On the other point, I’m not sure what you mean by ‘positive utility’. This depends on what the zero of utility represents. (Compare: saying the temperature is positive is meaningless until you’ve said whether you’re using Celsius or Fahrenheit or some other scale.) You seem to say that having positive utility is the same as being ‘good enough’. But this can’t be what Doug intended. He assumed that n = 100, where n represents the threshold of being good enough. So actions with positive utility, e.g. his a2, might fail to be good enough.
Sorry if I’m pedantic about this. My basic point is that talk of utility is not helpful here. If the issue is whether or not a certain act is good enough, then it’s better to ask simply whether it is good enough, not whether it has positive utility or whether its utility is at least as great as n or anything like that.
I don’t see how Ben’s example (i.e. the one where Doug gives him $200 in an envelope) is a counterexample to ALSC. He says: ‘As long as the utility of saving 4 people is greater than n, ALSC says I do nothing wrong by diverting the money.’ But n need not be less than the utility of saving 4 people.
We can, however, give the following objection to ALSC, which is similar to Ben’s. Assume ALSC is true and, for simplicity, the function U has no maximum value, i.e. for every act x, there’s an act y such that U(y) > U(x). Now let A be any act. We can then consider a situation in which you must choose between only two possible acts, B and C, such that (i) U(A) = U(B) < U(C) and (ii) whether you choose B or C will make absolutely no difference to you (perhaps it's as simple as pushing one button or another). Plausibly, B is impermissible. So n > U(A). But A was chosen arbitrarily. So, for all acts x, n > U(x); i.e. n is an upper bound of U. Therefore, ALSC collapses into maximising consequentialism.
The rough idea is that wherever the proponent of ALSC sets the threshold n, we can give a counterexample like Ben’s which forces them to raise n. In order to avoid all counterexamples, they must raise n so high that ALSC becomes indistinguishable from MC.
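Put schematically (this merely compresses the argument just given; no new assumptions are added):

\[
\begin{aligned}
&\text{For arbitrary } A:\quad U(A) = U(B) < U(C), \text{ with no difference to the agent between } B \text{ and } C;\\
&B \text{ is intuitively impermissible and non-maximal, so by ALSC } U(B) < n, \text{ i.e. } n > U(A);\\
&A \text{ was arbitrary, so } n > U(x) \text{ for all } x, \text{ and clause (i) of ALSC never applies.}
\end{aligned}
\]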
Campbell — By “positive utility,” as attributed to a particular act, I had meant an increase in the overall utility that results specifically from performing it. Getting some people to do less good wouldn’t have positive utility in that sense, since it would subtract from the total good. In your message to me here, are you identifying the utility of a particular act with that of the overall outcome that results from it in combination with all the other factors in play?
Sorry to return so late, but my absence was unforeseen and unavoidable.
Jussi: You write: “It cannot be counter-intuitive if a view allows [one] to bring about less goodness or less goodness for if there is no sense at all about what goodness in this abstract sense is.” I disagree. If a view implies that it is morally permissible to go out of one’s way (e.g., to get off the couch even when one both prefers to remain on the couch and is better off remaining on the couch) to act so as to ensure that a worse state of affairs will obtain, then that is a counter-intuitive implication. ALSC has this implication.
David: You write: “Once we’ve got those 100 utils, it’s all the same from there, morally speaking.” Not necessarily so. Some good enough outcomes are better than others. And some satisficing consequentialists will claim that it is morally better to bring about a better good enough outcome instead of a worse good enough outcome. Indeed, this is precisely why it is thought that satisficing consequentialism leaves room for the notion of supererogatory acts, acts that go above and beyond what’s morally required. What’s morally required is that one brings about a good enough outcome. But if one makes a sacrifice in order to bring about an even better outcome than some alternative that’s merely good enough, that would be supererogatory.
Pat: I do not endorse CSSALSC, and I agree that it is ad hoc. Also, I retract my objection to BIHSC, which is not to say that I endorse it either.
Everyone: Campbell is right. My use of the word ‘produce’ was misleading. In the table, I’m just stating what the utilities of the various possible outcomes are. The outcomes with positive utility are those where the quantity of hedons (or whatever unit you want to use) exceeds the quantity of dolors.
Ben: Regarding doing nothing, it seems that, in ordinary conversation, when we say that someone did nothing, we don’t mean that he stayed rigidly still. To stay perfectly still so as to allow dust to settle on some trigger mechanism requires one to exert great physical and mental energy. It’s hard to do this. So, by ‘doing nothing’, I mean taking the path of least resistance: the path that involves the least amount of (physical and mental) exertion. In some cases, this will involve lying on the couch (although not lying perfectly still). In other cases, this will involve floating down a river, letting the current take you where it will. That’s what I mean by ‘doing nothing’. Again, I’m not sure that I can give a more precise specification of what I mean by ‘doing nothing’, but you haven’t convinced me that this notion that I’m grasping at doesn’t make sense.