It is radical, but my idea is that one submits to ranking houses (which could remain the existing journals). Every paper submitted will be published online and ranked. You may submit a paper only once. You fix it up in light of (presumably more careful and more numerous) referee reports, but then it is published with a numerical ranking. Advantages include: 1) less refereeing overall and so, potentially, more careful refereeing from people who specialize more closely in the area; 2) a much quicker time from submission to publication, eliminating pressure to choose where to submit on strategic grounds; 3) it encourages people to finish papers before submitting them, rather than treating submission as entering a lottery; 4) as things stand, the difference between just making it into a journal and just missing is enormous, and this system allows one to get credit for "very close to making it into Phil Rev". Call this the Wine Spectator Model.
However, a serious concern about this proposal is that one bad set of reports has a more dramatic effect on a paper under this scheme than under the status quo. As it is, one can get a set of terrible referee reports and recover, still placing the paper in a good journal. That, I acknowledge, is a real issue with this proposal.
But I think people are not appreciating the scope of the problems that currently exist and need fixing. Many people are spending a lot of time, for example, desk-rejecting papers, and all of that would be eliminated. Many papers take years to come out, and people agonize over whether they can afford to send the paper they love to a top journal, given the high rejection rates and the need for publications before hitting the market. The strain on referees, and on those seeking referees, is too high.
Some may also think an additional benefit is that it would be good for the profession if we produced fewer but better papers. This would certainly make reading the literature and refereeing more rewarding.
My biggest worry is that this assumes that there is a universal ranking scheme. Not unlike wine, philosophy papers can have virtues on a number of different dimensions, and there's reasonable disagreement about how we should weight those virtues. (There's a reason why winning one wine competition is uncorrelated with winning another, and it's not that wine snobs are idiots.)
Having a single ranking is going to implicitly shape the discipline in ways that encourage conservatism. I’m not an editor of a journal, so I don’t have an appreciation for the load on that end of things, but I take on more than my fair share of refereeing, and it’s really not all that bad. I’m more than happy to endorse changes to the journal system, but I don’t want changes that force unnecessary conformity, beyond the level we already have.
Maybe say more about the connection you see between a ranking system and conservatism? I don’t immediately see the connection.
Hi Dave,
This is an interesting proposal. I have two questions for now.
First, what are the identity conditions for a paper such that I "may submit a paper only once"? Is it non-branching? Sometimes what I would call two papers have a common ancestor. Sometimes I abandon one thesis but use the arguments for the old thesis (suitably revised) to support a new thesis. Is this the same paper or a different paper? Sometimes I submit a short paper to Analysis and then add and defend other theses, packaging them together with those defended in the rejected Analysis submission. It's now three times longer than the Analysis submission. Is it the same paper? Sometimes I combine the ideas from two submissions into something that I submit to another journal. Is this the same paper as the two? How do the rankings get passed on from the two earlier submissions to the new fused submission?
Second, could you say more about why I should think that there's a significant problem? When I first entered graduate school in 1991, I remember my graduate professors lamenting that people are publishing too much, that philosophers are publishing too early, that the philosophical literature isn't as good as it was in the old days, that referees have to work too hard, etc. And every year I hear similar complaints. But are things really that different from how they were in 1991 such that we face a real problem? Is there anything more than the subjective impressions of old-timers like you and me to support the contention that there's a real problem and, if so, what exactly it is? My impression is that I do a lot more (a heck of a lot more) refereeing than I used to do, but I don't know how much of this is due to some growing crisis in philosophy and how much is due to my development in the profession.
Doug,
Good issues. On the first one, it is not yet clear to me that there are issues concerning the individuation of papers on my model that are not already issues on the current model. Were you thinking there were? It seems to me your examples are already problems today. On the second issue, data is hard to find here, but I feel rather confident that there is a lot more grad-student submission going on than when I was in grad school. Back then we were told not to bother trying to publish. No one says that anymore.
On the first point, Doug, I thought it obvious that the norm is: “different title = different paper.”
The second point is a good one. It's hard to assess this. We know that numbers of submissions are way up at many journals, that Nous and PPR regularly shut down submissions, and so forth. What we don't know is where these additional submissions are coming from. Many are surely coming from grad students, but we don't know the percentages. Regardless, there is a massive and growing burden on editors, who must do much more work than they used to in order to sift the wheat from the chaff at the initial desk stage. Surely some of the papers that wouldn't have made it through this process are now making it through, and so are being passed on to referees, who see greater mediocrity in the papers they ref. This purportedly produces lower-quality published papers as well.
Or does it? I’m puzzled, myself, about how to assess the current state of “quality” of both publications and the many, many papers I ref each year. They do seem “more mediocre” on the whole to me. But I’ve gotten better as a philosopher over many years of work (I hope), and so I also see flaws in papers now that I wouldn’t have picked up on years ago. Papers I used to see as perfect I now see as deeply flawed. But this says more about me than about the quality level in the profession.
At any rate, although I fully appreciate the “burdens on editors and refs” worry, I’m not sure I can fully get behind the “lowered quality” worry.
Hi David,
Here's what I have in mind in terms of conservatism. If you grant my premise that there are multiple dimensions along which papers might be judged, then surely some of those dimensions would be things like topic area, the particular question taken on, the methodology chosen, etc. But if we have a single numeric ranking scheme, then there is a single community judgment on how to collapse those multiple dimensions into a single score. To do this, the community would have to agree on which topics, methods, and the like are better than others. Since this is probably not going to come from a vote, it's going to be put in place by some set of gatekeepers. Journal editors are already gatekeepers, but at least there are (to some extent) different house preferences for kinds of papers, and so there is room for a new area or new methods to grow. If anything, I'd like to see more, and more varied, journals for this purpose.
The core problem in these kinds of discussions is that we demand that journals perform two very different tasks for the profession. First, they are there to curate and disseminate new ideas and arguments. Second, they are there to provide evidence in favor of hiring and promotion decisions. It’s very difficult to optimize for two tasks. We’re going to have to favor one over the other. But we should at least be clear which it is that we’re favoring. I prefer using journals mostly for the first task, and then something like citation metrics as an input into the second task, even though those are imperfect. But my general view is that it is better to filter less upfront, as I think type II errors are a lot more costly than type I errors. Whether people end up reading and building on ideas is a pretty good filter for whether the work is valuable.
Maybe I don't understand your proposal. So, suppose that I submit a paper to one of the ranking houses: say, Ethics. I get back some critical referee reports with poor scores. Can I now fix it up and then submit it to another ranking house (say, Phil Review)? If so, how is this proposal helping to fix the putative problem? Won't authors just submit to one ranking house after another until they find one that will give them good scores? If, by contrast, your proposal disallows authors from submitting a given paper to more than one ranking house, then, given that I can presumably submit a different paper to a different ranking house (say, Phil Review), how do we determine whether the paper I want to submit to Phil Review is the same as the one I submitted to Ethics? We don't currently have this problem, because one is allowed to submit a paper rejected from Ethics to Phil Review without any revisions or changes whatsoever. Perhaps you could fill in the details of your proposal. What happens after I submit to Ethics and get low scores? Can I withdraw the paper? Can I have it published as is with its low scores? If I revise it, does it get rescored? How many times can I revise it? Can I submit it to another ranking house if I don't like the scores that I got?
I'm also confident that there is a lot more grad-student submission going on these days than when we were in grad school. That's pretty easy to gather evidence for. But what I'm not getting is what the problem is. I guess you're assuming that if people try to publish in graduate school, then there will be more submissions than if people waited until after grad school to try to publish, presumably because some of these people will fail to finish grad school. That seems plausible. But how much of a problem is this? How many more submissions does this result in? How many of these additional submissions go out to referees rather than being desk rejected? What are the advantages of having people submit in graduate school, and do the disadvantages outweigh these advantages? I'm just not seeing where anyone has given me very good evidence that there is a significant problem here or, if there is, what exactly the nature of the problem is. As far as I can tell, there are a lot of potential advantages to having graduate students try to publish as well as a lot of potential disadvantages, but little evidence that there is some major problem overall. And your proposal is, by your own admission, quite radical, with substantial risks given the worry that you identify. So why even consider such a risky and radical proposal before we have a clear sense of whether there is a problem, how big it is, and what exactly its nature is? (I should say that I'm open to the possibility that there is a real problem here, but all the evidence that I currently have seems like the evidence that I get from parents who think that things are worse than they were in the good old days.)
I like what Ryan is saying about the two different tasks. I feel as if many philosophers are on board with trying to assess the quality of publications over quantity for purposes of both hiring and tenure/promotion, but that the main reason for resistance on that score is that administrators are quantity-fetishists, as they are in no position to assess quality. I remember a case many years ago (Doug will too) when a department I was at wanted to hire a guy who wasn't going to make tenure at a major university because he only had five publications, but the administration had a hard cut-off of six for tenure. The problem was that three of the person's publications were amazing, among the most-cited and discussed articles in that area. But that didn't matter to the administration. So that person wound up publishing a tiny article in Analysis that placated the administration and blocked us from hiring him.
Thanks, Shoemaker. I agree that submissions are way up at some journals. But are they correspondingly down at other journals? How much of the increase in submissions is attributable to the pressure on graduate students to try to publish these days, and how much is attributable to other factors: pressure to have more publications for tenure, pressure to try to keep up with the amount that others in the profession (e.g., Dale Dorsey, Mark Schroeder, and Dave Shoemaker) are publishing, increasing numbers of people in the profession (if that's true), not enough growth in the number of journals relative to the growth in submissions, etc.? I find it strange that people are proposing all these radical solutions to an unclear putative problem. If we don't have good evidence for what exactly the problem is, how can we think that we can come up with sensible solutions to it?
Doug, that helps me see your first worry better. I was picturing, in ways I have not worked out, that submitting would be quasi-committing to having the paper come out in that journal with its ranking. So withdrawal would be rare and only for demonstrated cause.
It takes a long time for papers to get reviewed and decisions made. Even if it is not longer than it was before, it is longer than is ideal. Part of this is due to the difficulty of finding referees. The result is that grad students get few shots at getting their papers published and have to work out where they can afford to send the paper, given their need to get something accepted.
So, the proposal, as I understand it now, lets you submit to only one ranking house, and once submitted you cannot withdraw the paper except for demonstrated cause. So, if you make a mistake and submit a bad paper, your bad paper will haunt you for the rest of your career, being published with its terrible ranking for all to see. That's bad enough, but here are some other concerns with this proposal.

First, this proposal seems to disadvantage the worse off. The worse off are those who are new to the profession, those who didn't go to a highly ranked program, those who don't have much money to travel, and those who don't have colleagues with the time or expertise to comment on their work. These people will have to publish their work without having the opportunity to go through several rounds of high-quality feedback followed by revisions. They won't have the same network of potential commentators and won't have the opportunity to conference their papers as much. This isn't a problem for me now, as I'm fairly well established and can get some of the best people working in the same areas to comment on my work, either through professional connections or by conferencing my papers. But when I was newly out, the only way that I could get such leading people to comment on my work was by submitting my work to excellent journals. Without this sort of feedback, I think that my ideas would never have gotten the chance to be developed properly and would hardly have been read by anyone at all.

Second, learning to package one's ideas in the right way is a skill that is hard to learn except by having the opportunity to go through several rounds of submission, rejection, and revision. I worry, then, that those who initially have trouble packaging their ideas and/or those who are not taught in graduate school how to package their ideas will end up flunking out of the profession. There are many stories of people (including Mark Schroeder and myself) who had a lot of trouble publishing at first and so had many, many rejections before learning how to package their ideas in a way that would be well received by reviewers. On your proposal, such people will publish some low-scored work that's read by only a handful of people, and they will subsequently be shut out of the profession.

Third, your proposal will really encourage people to play it safe. I would never have worked on consequentializing, for most reviewers were very unsympathetic to this new project when I was starting out. Most of them just rejected the idea of an agent-relative consequentialist theory out of hand. It would have been better for me to work on something safe than to try to break new ground. As it was, I was able to shop my papers around until I got some of the few sympathetic reviewers.
I like many things about this proposal. A modified version of it could be: several journals team up and frankly concede that (say) A is more prestigious than B, which is in turn more prestigious than C, which essentially becomes a fixed ranking. You submit to them jointly: a team of reviewers looks at the paper and gives it more feedback than you would have gotten from submitting to a single journal. If it's judged exceptional, it gets published in A; very good, in B; etc.
Either this or David Sobel's original proposal could be modified by splitting a general ranking off from a subject-appropriate category. Say, a submission could be published by D (or on "list" D) because that's for, say, aesthetics (A, a general journal/list, could still include papers on aesthetics that are thought to be useful for generalists, while D covers those primarily useful for specialists). This avoids some problems of a one-time single-scale "quality" ranking (which could be based on bias, misunderstanding, perspectives that shift over time, etc.) but still retains some of the other benefits of the original proposal: getting more feedback on the initial submission, while saving everyone's time because the process occurs just once.
I’m not as concerned as Doug about the possibility of a low ranking for an early paper. Seeing someone submit several papers might still show effort, as compared to someone who didn’t submit any. Obviously someone who submitted once and got it placed highly looks better than either, but that remains possible under this proposal. And someone who got one or more papers published at a low rank can still mull over the ideas, figure out how to present them better, and try again with a new approach.
I'm a little puzzled by David Shoemaker's remark that 'I thought it obvious that the norm is: "different title = different paper."' Certainly you can't just make a new paper by changing the title of one and resubmitting it (or submitting the same text elsewhere). The question of when paper A is different enough from paper B to count as a "new" paper is fuzzy (I was actually recently accused of "self-plagiarism" because the first part of a paper on topics X+Y was thought to resemble too closely an article I had published on X, even though I took care to have little sentence-level overlap; I think this was a little unfair, though I grant that the reviewer had a point). But I don't see that Sobel's proposal would make this determination any more difficult.