It appears the Philosophical Gourmet Report is returning. The last installment was in 2014-15, so we are now in the midst of a three-year gap in rankings, the longest gap, I think, since the thing got going in the 90s. We may not have such a long gap again. So I got to thinking this might be a good time to reflect on people’s experiences of doing without such a ranking for a while. What were the costs, if any? What were the benefits, if any? Or was this too short a gap to serve as a useful test of life without rankings?

There are now many more surrogates for rankings than ever before. In large part as a result of pressure from the Report, most departments now provide detailed placement information. In addition, most faculty at graduate programs post their CVs and research interests. That, combined with a sense of what the top journals in the field are and the availability of citation information, grad attrition information, etc., means that people without rankings would be much less in the dark about where to go to grad school than I was back when dinosaurs roamed the plains, in the 80s, before we had a widely consulted ranking of grad programs in philosophy.

Given the availability of such resources, do we still need rankings? If so, how might they be made better, and what do we need or want ranked? If not, what, if anything, is to be done, given that they will exist anyway? Departments that are ranked highly unsurprisingly tend to tout their ranking, with the result that many of the top departments seem to tacitly ratify or endorse the rankings. This self-interested reason for top departments to tout favorable rankings would seem to exist whether or not the Report is well done and whether or not a department finds the rankings reliable. The new editors of the Report may be, especially at this moment, interested in and responsive to advice about how to make the Report better. What would be good advice for us to offer?

6 Replies to “The Return of Rankings”

  1. The above is mainly designed just to stimulate discussion. But let me say a few more things, speaking strictly for myself. I’ve always thought it would be good to have rankings of some sort. Possibly this is partly the result of vividly recalling how at sea I was when I was choosing a grad school. The situation then, I think, was bad, and compared to it the rankings we were given, whatever their flaws, improved things in my view. Whether we need rankings hinges on what is available without them, and back before the Report very little was available. In my view, at a minimum, for many years the Report provided a valuable service given what other resources were available.

    The overall rankings seem to me significantly more suspect than the specialty rankings. I am much more confident in my opinion about the excellence of the work being done by people in my sub-fields than I am in my opinion outside of those areas. The latter opinion is significantly shaped by who I happen to know in those areas and what they tell me. I wonder about the possibility of just doing specialty rankings. Alex Guerrero had a Facebook thread a while back raising worries, which I share, about how the Report has recently handled some sub-fields. I encourage him to share those concerns here if he likes. But I am speaking here not about the excellence of the Report as it exists but rather about whether we ought to want some such report and, if so, what shape we would like it to take. I assume that during this transition of editorship there may be room to alter the way the Report is done if better ideas for how to do it emerge.

  2. I think the rankings were an excellent thing when they started, for just the reasons David says. And I think things were worse for undergrads in the southern hemisphere than they were in America. Note that from Australia, even placement information isn’t that useful, because Australian undergrads have no idea what’s a good or bad placement in America. (Is a job at Reed, or at Pitt, a good job? How would a 21-year-old Australian who has never heard of either place, and doesn’t even really know what a liberal arts college is, know?)

    The rankings may or may not be an intrinsically good thing now. But I think they must continue. (This is, I believe, a point Tom Hurka made in a Daily Nous thread.) The alternative to doing new rankings is to have the current rankings stay around, anchoring people’s impressions indefinitely. That is very inegalitarian; it means departments can’t change their status.

    There is an interesting theoretical question about whether, had rankings never come into existence, it would be a good thing or a bad thing to introduce them. I suspect it would still probably be a good thing, but that’s more debatable. The bigger point, I think, is that while it would be good for those of us now highly ranked to have the rankings fixed in amber, it wouldn’t be fair. And that’s what will happen with no new ranking.

  3. When I applied to grad schools, I relied on the advice of my teachers (especially Fred Dretske and Claudia Card). They knew me, and they knew programs “out there”. It’s just luck that I had folks like them to rely on, right? For those without this good luck, the additional information that programs provide strikes me as more valuable than rankings. As you pointed out elsewhere, David Sobel: “Yes, I went to a small liberal arts college (which are awesome by the way–totally send your kids there) and as wise in other ways as my teachers were, they were not well plugged into the then current landscape of grad programs.” Yes, the landscape changes, and not everyone keeps track of who is where, working on what, etc. The field is broader than ever. The department-by-department information is more detailed and helpful than a rank. Also, sometimes students are constrained by location, or cost, or something else that isn’t ideal for finding the best possible match. Then the rankings can make their degree seem like a poor one, when it might be a good degree for their plans. (It has taken me a very long time to believe this.)

  4. I have yet to hear a good reason for the rankings other than that they help prospective grad students decide where to go, and it has not yet been explained to me how the rankings put together by Carolyn Dicey Jennings aren’t vastly superior in this regard. It is only in a grad student’s interest to care about prestige insofar as that translates into a job; prestige without placement is pointless.

    I’m not far out of a department with a (top 20) Leiter ranking that is wildly out of step with its placement, so perhaps I am overly sensitive to these issues. But placement (maybe plus attrition) reveals so much about how good the department will be.

  5. When thinking about the rankings we have in philosophy, it’s worth asking what we would have without them. The Leiter rankings were originally an aid to prospective graduate students (and an excellent one, in my opinion). But they have also been invaluable to me as a department chair. Administrators at my university want to know how our programs are rated, and if a chair can’t provide decently done rankings, then whatever exists will be used, and I do mean whatever, even those wretched national rankings that were produced a few years ago (I forget the name). We are much better off than most disciplines in the humanities. I can’t tell you the number of times I have been able to refer to the Leiter rankings in ways that were useful to our program. They are useful in administrative settings in part because they reflect a quite stable consensus in our “analytical” tradition. Pointing to that consensus has also been very useful to me, especially when talking to people who really have no idea what we do.
