I am going to presuppose the need for philosophy rankings, the need for an alternative to a ranking run (even in part) by Leiter, and that Leiter will not step down. Whether any of these presuppositions is true is controversial, but I want to think together with you about what we should think and do if they are true.

A key issue is how democratic the Alternative should be. At one extreme, there might just be a group of philosophers who announced that they would be willing to serve and start up something—themselves selecting a set of advisors. At the other extreme, we might somehow, all of us philosophers, elect the group that oversees the rankings.

Of course these two models are not mutually exclusive. Both might be tried simultaneously. But I am wondering whether we should prefer one sort of system to the other. Perhaps one option is more feasible or more likely, or one faces problems serious enough to make the other the better choice.

Among the problems for the voting option would be 1) deciding who gets to vote, and who gets to decide who gets to vote (obviously limiting voters to APA members is unacceptable if this is meant to be an international ranking of all of English-speaking philosophy; but may adjuncts or graduate students vote? May undergraduate majors? May philosophers in any country? Who decides whether a vote is valid?), and 2) worries about strategic voting and self-interested voting.

Among the problems for the non-voting option is that over-opinionated, power-hungry philosophers are especially likely to step up and self-nominate. But perhaps this worry is nearly as bad in the more democratic option, since we would still need volunteers to vote for. Additionally, such a non-democratic scheme might badly misrepresent the philosophers it purports to serve and be unbalanced in whose fields and concerns get represented. One might think, as Alan Richardson put it over at Daily Nous, “the time has come for the profession to make sure that the conduct of the rankings of departments reflects the interests of the profession as a whole.” One might also see Zachary Ernst’s criticisms of the reliability of the non-democratic options here.

More generally, folks, let’s think together about what an Alternative should look like, assuming it should exist. What would be the other crucial issues to address? One might wonder if we need both overall dept rankings and specialty rankings or just one or the other, for example.

26 Replies to “What Should an Alternative to the PGR Look Like?”

  1. Good idea, David.
    I don’t know what to do, but I’m pretty sure philosophy shouldn’t proceed without first considering what works and what doesn’t in other academic fields. The problem of determining which Ph.D. program to attend or to apply to is not unique to prospective philosophy students. It may turn out that no existing model works well. But we should investigate this first, especially if we are skeptical about armchair politics/epistemology.
    I imagine that there are people in Higher Education who have studied this very problem. It might be worth finding and reading their work.

  2. Good point, Eric. It would be great if someone who knows how other disciplines solve this problem would report on it.
    I can imagine thinking that the democratic vs. non-democratic question is hard to think about in the abstract like this. If someone whom everyone trusts and considers wise popped up to do it, that might be the best option. But at this point we don’t know who might pop up, and so the harder, more abstract question is our question. I think some worry that equal representation might be bad: there are areas of philosophy in which not a lot of good work gets done, and equal representation or equal votes would give these areas or these philosophers equal say with those who work in the more important areas where better work is done. Such thoughts would be a more direct challenge to the democratic model. What do folks make of such worries?

  3. It seems like voting-based rankings will suffer from rational-cheating worries. Why not go for broke with some kind of outcomes-based ranking system — job placement data, faculty publications, and the other usual outcomes suspects? Wouldn’t it be nice to take out the “perception of quality” effect on rankings? Genuinely wondering. I’m sure others have thought about this type of alternative.

  4. Let me report two out-of-the-box ideas I recently heard on Facebook that I thought worth mentioning. The first would be to have a database with lots of info, including the sort of objective info that Manyul mentions above, and then give the user the option to look at how the depts stack up against each other on the dimensions of the data that she selects. On such a picture one might include a perception-of-quality option but need not combine it with the other data to produce a master ranking. The second out-of-the-box option I have heard was to rank individuals rather than depts, with dept rankings perhaps just trivially determined by the former. I am not sure if the latter is best understood as a reductio of the whole ranking enterprise or an interesting direction to consider.

  5. The second “out of the box” option is interesting. Building on it, I suggest that individuals could be ranked both with respect to their scholarly output and also with respect to their “moral fiber,” their quality as human beings–information that surely is relevant to prospective grad students. You could then combine these two rankings together to generate an overall individual ranking, weighted perhaps at 2/3 for research and 1/3 for character. So, character counts, yes, but we aren’t fanatics about it. An advantage of this proposal is that it would incentivize better behavior by philosophers. Why should you try to become a better person? Because if you don’t, your individual/departmental ranking will be jeopardized.

  6. I’ll out myself here as the person who suggested on FB that we should rank (or otherwise publicly evaluate) the scholarly output of individual philosophers in PhD-granting departments. I am not sure whether this ought to be done; but it seems to me that it might be better to rank individuals than to rank whole departments.
    David S’s idea of letting dept rankings be trivially determined by the rankings of individuals in them is intriguing, although I expect it’d be difficult to create a non-misleading function from individuals to depts. (As a first stab: maybe the rank of a department should be determined by the sum of the rankings of the individual PhD-supervising philosophers in the department, divided by the number of grad students in the department. A toy sketch of this appears after this comment.)
    Justin T, surely you’re joking about the proposal to rank individuals in terms of their moral fiber–aren’t you?
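    A toy sketch of the “first stab” function just described, on one reading of its terms: each PhD-supervising philosopher’s “ranking” is taken to be a numeric score where higher is better, and all names and numbers are hypothetical.

```python
# One reading of the "first stab" above, on made-up data. Assumptions:
# each PhD-supervising philosopher's "ranking" is a numeric score where
# higher is better, and each department's grad-student headcount is known.

def department_score(faculty_scores, num_grad_students):
    """Sum of individual faculty scores, divided by grad-student count."""
    if num_grad_students <= 0:
        raise ValueError("department must have at least one grad student")
    return sum(faculty_scores) / num_grad_students

# Two hypothetical departments: (faculty scores, number of grad students).
depts = {
    "Dept A": ([4.5, 4.0, 3.5], 30),  # stronger faculty, large cohort
    "Dept B": ([4.0, 3.0], 8),        # smaller faculty, small cohort
}

for name, (scores, n) in sorted(
    depts.items(), key=lambda kv: department_score(*kv[1]), reverse=True
):
    print(f"{name}: {department_score(scores, n):.2f}")
```

    On these made-up numbers the per-student denominator puts Dept B (0.88) above Dept A (0.40), one illustration of how such a function could mislead.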

  7. Perhaps “rank in terms of moral fiber” isn’t the right way to put it, but I do think there is something to be said for a ranking system that factors in things like how easy faculty members are to work with, how collegial a department is, etc. Surely that sort of information is relevant to a prospective grad student trying to make a decision about where to apply, attend, etc. I’d think that for at least some prospective grad students, a slightly less reputable department full of friendly, helpful philosophers is going to be a better fit than a “top” department full of a-holes that bicker all the time and aren’t really interested in helping their students improve. So “moral fiber,” probably not. But approachability and helpfulness of faculty, collegiality of department, those seem relevant. I don’t know how you measure that sort of thing, which is a problem, but it seems like a factor that clearly matters when one is choosing a grad program, and it might bear on things like attrition rate and time to completion, which are also relevant considerations for prospective grad students.

  8. Although I certainly agree with what you say here, I’ve never understood why philosophers want rankings at all. Surely, philosophers would be capable of forming their own opinions about the quality of various departments if they had easy access to relevant data — faculty lists, specialties, publication history, seminar topics, placement rates, financial support, and so on. In the 21st century, it’s easy to enable departments to submit this kind of data to a database and empower individuals to do what they like with it.

  9. I don’t know how feasible a moral-fiber ranking is, but Eli is quite right that prospective grad students have a great interest in the atmosphere of a dept and in the quality of its mentoring. Could the Site Visit team perhaps generate data about these aspects of depts?

  10. Zachary,
    Maybe at the end of the day that is the way to go. But I think it needs to be recalled what it is like for an undergrad trying to decide where to go to grad school. I came out of a small liberal arts college and did not know what a TA was or what they did, what the good journals were, who was a playa, etc. I had mainly read Descartes, Plato, etc. And even though I went to a good school, my advisors were out of date about the profession. How such a person would be able to make sense of the blizzard of data is not obvious to me. For me a lot hinges on what the best model we can come up with for rankings would look like. If it does not look that good in the end, I would agree with you that it is best to do without rankings.

  11. This feels kind of like debates about NCAA Division I rankings. Maybe we could use a computer ranking system that takes into account a variety of other rankings, weighted in ways that make self-serving voted rankings less effective and give some voice to a variety of outcomes data. (Or, better yet, have a tournament/playoff system.)

  12. Hi David —
    I agree. An advantage of the database idea is that it could easily be used to make one’s own rankings. So if you wanted to create a ranking based on a specialty, placement rates, financial support, or a weighted combination of them, you could do it. I’d expect to see established people use it to post their own rankings of departments based on objective data. So if you were an undergraduate who wanted to attend grad school, you’d be able to find rankings — rankings that are better, more objective, and more justifiable, with their methodology and underlying data fully exposed.

  13. David K.–Yes, a joke. Based on David S’s description, I had assumed the original suggestion of individual rankings was intended as a reductio, and so added “moral fiber” as something that, on the one hand, really is relevant to students, and on the other hand, would obviously make the rankings that much more contentious.
    But to engage your suggestion more seriously, I think that having a rankings system involves making a trade-off: on the one hand, the information provided really is useful to students, especially those without plugged-in undergrad advisors; on the other, there is something a little unseemly and maybe also distorting about ranking colleagues. I’m mostly content with the trade-off that the present PGR makes with its departmental rankings. I suspect individual rankings would provide some relevant new information for students, but not enough to justify the fighting, hurt feelings, harms to the careers of unjustly under-ranked individuals, distortions to the profession, and so on, that would go with such rankings.

  14. So I am gravitating towards a database that collects information about depts., is searchable by topic, but does not itself produce a master ranking. Rankings on this or that matter, such as placement, or placement in Ph.D.-granting depts., could trivially be generated from such data. Users would be free to pick the data they think most relevant to them. The database could include a discussion of the pros and cons of relying on this or that type of data. Depts would have to update the info in this database and likely pay a fee to help cover the work that goes into it. The model would be compatible with adding a more reputational set of rankings as well. I hear on Facebook from Eva Feder Kittay that the APA is working on something like this, absent the reputational rankings, as we speak. Further, because of the great interest prospective grad students have in climate matters and mentoring, I think the Site Visit program is best situated to produce data on these matters to be fed into the database. To my mind this has the happy result of providing additional incentive to invite the Site Visit team to one’s dept and to clean up matters that affect climate and mentoring.

  15. I tend to agree with Zachary’s view regarding providing information about departments rather than ordinal rankings.
    To address David S’s concern about prospective graduate students not understanding the information presented to them: it seems to me that the solution might be similar to the one the PGR has adopted for informing students how to use rankings to select potential programs to apply to. That is, whatever website hosts the database of information about programs could also post short blurbs about how the various bits of information presented are relevant to the decisions that incoming graduate students must make.
    I think that would be far more useful than relatively opaque ordinal rankings for someone trying to make a decision regarding which program would best serve their needs/interests.

  16. First, I agree with others that an online database of ‘objective’ measures is a good idea. I would have thought it would be relatively trivial to program a website where one could plug in a formula that assigned those measures weightings and thereby generated an overall ranking. It would be a further trivial task to allow website users to plug in their own formula, and provide a link to the rankings thereby generated, along with a description of the weightings they’d chosen. With that in place, different individuals or groups could produce rival formulae, and we could have an ongoing argument about which formula produced the most sensible overall ranking. (A toy sketch of this follows the comment.)
    Second, I’m uneasy about ranking individuals, primarily for the reason Justin most recently gives: it would generate a lot of hurt feelings and would have damaging self-reinforcing effects. On the former, it just seems heartless to have a system that declares publicly who the worst philosopher in the world is. Even if that person disagreed with the ranking system, that wouldn’t be a nice title to live with. On the latter, being ranked low on the list would lead to fewer invitations to speak, contribute to volumes, etc., and those things would in turn impact one’s ranking. That some departments suffer from these kinds of effects is sad; for some individual to suffer from them seems worse. (Someone may press me on why that is so, and I may well struggle to articulate an answer.)
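    A toy sketch of the plug-in-a-formula website from the first paragraph above; the measure names, weights, and department data are all hypothetical placeholders.

```python
# Sketch of the "plug in your own weighting formula" idea described above.
# Every measure name, weight, and number here is a made-up placeholder.

# Hypothetical per-department measures, each already normalized to 0-1.
DEPARTMENTS = {
    "Dept A": {"placement": 0.80, "funding": 0.60, "retention": 0.90},
    "Dept B": {"placement": 0.65, "funding": 0.85, "retention": 0.70},
    "Dept C": {"placement": 0.90, "funding": 0.40, "retention": 0.60},
}

def weighted_ranking(weights):
    """Sort departments by a user-chosen weighted sum of the measures."""
    scores = {
        name: sum(weights.get(m, 0.0) * v for m, v in measures.items())
        for name, measures in DEPARTMENTS.items()
    }
    return sorted(scores.items(), key=lambda pair: pair[1], reverse=True)

# One user's formula; a rival just submits different weights, and the site
# could link each published ranking to the weights that produced it.
my_weights = {"placement": 0.6, "funding": 0.1, "retention": 0.3}
for name, score in weighted_ranking(my_weights):
    print(f"{name}: {score:.3f}")
```

    On these invented numbers this formula ranks Dept A (0.810) over Dept C (0.760) over Dept B (0.685); different weights would reorder them, which is just the rival-formulae argument envisioned above.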

  17. Sorry that I’ve not read all the comments above. But David Sobel encouraged me to share something I posted on Facebook (see https://www.facebook.com/johnschwenkler/posts/10152783281409680) about why I think placement statistics are a poor guide to the quality of a graduate program. Here’s what I wrote:
    Lots of people seem to see placement statistics as a superior alternative to PGR-style ratings, as they are supposed to be a better indication of the quality of graduate training. I disagree. Here are some reasons why:
    1. Placement statistics don’t account for the differing abilities of students before they begin graduate study. E.g. a prestigious department that does a mediocre job of training but selects from a really excellent pool of applicants may have a better placement rate than a less prestigious department whose applicants are weaker but which trains them really well.
    2. Placement statistics don’t account for the differences in “kinds” of jobs. E.g. one school might place its students mostly in research positions, another mostly in teaching positions, another mostly at religious institutions, another mostly at institutions in a certain geographic region, etc.
    3. Placement statistics reinforce the impression that the only good result of graduate study is an academic job. This is false. And a program that does a really good job of preparing its students for non-academic careers shouldn’t be penalized for this.
    In my eyes, the upshot of all this is that placement statistics are just too clumsy to account for relevant differences in the abilities, interests, desires, etc. of prospective students.
    To be clear, this is a problem with STATISTICS specifically, as opposed to detailed RECORDS of placement (as well as attrition, etc.), which I think departments should absolutely provide. And while of course it’s possible to tell prospective students to take quantitative measures of placement only with a grain of salt, just as with the PGR ratings we all know this is unlikely to happen — for any numbers that are given, programs whose placement is statistically “better” than some others will be regarded as superior, and while this may be true in certain cases there will be too many others where it is not.
    Much better just to provide detailed information on departments, including faculty interests, publications, recent seminars offered, recent student outcomes, etc., and let students identify which departments are most likely to be a fit for them.

  18. As I have posted elsewhere, I had no idea what the PGR was when I was applying to graduate programs from my small liberal arts school in the Appalachian mountains. I suspect that was because my advisors didn’t know about it either. But the APA provided a book of all the programs in the US with specialties, self-identified strengths, placement, etc. I found it very useful and sent out a bunch of applications to various programs — a few to each of what I judged to be top-tier, middle-tier, and bottom-tier schools. When I finally learned about the PGR, my rankings were roughly in alignment.

  19. A few ideas here:
    On the assumption that prospective grad students are the primary audience for the PGR, I’ve long found the PGR’s assumption that faculty reputation equates to how desirable it is to attend a graduate program obviously wrong. Too many other factors, both intrinsic (how pleasant the faculty are, financial aid) and extrinsic (job placement), are relevant. I’d thus be in favor of making lots of information of various kinds available. But if we want to have a reputational survey at all, then (a) all grad programs should be evaluated, not just those that the survey’s administrators think worthy of evaluation, and (b) I don’t see any good reason for faculty at grad programs to be the only ones doing the evaluating. It seems to me that this practice invites a faculty-centered perspective on graduate studies that we should be trying to counteract.

  20. I’m quite sympathetic to John’s dislike of aggregate statistics, for exactly the reasons he gives. But of course there will certainly be a lot of disagreement about what sort of data is valuable; this is why I’d advocate making the tools available for people to slice and dice the information however they like (including simply reading the raw data without any statistics applied to it at all). If some people liked lots of crunchy stats, they could have them; if others liked more detailed qualitative information, they could have that. Even if you wanted to create a full-blown ranking with added reputational information, you could. The idea is to let people decide for themselves instead of putting any particular person or group of people in charge of making these judgments.

  21. What about a panel of experts who make themselves available to prospective students? Instead of aggregating their opinions and presenting the results to the students, you could have a system where each student inputs into a program certain info about herself or himself — e.g. areas of interest, figures of interest both classic and contemporary, figures of uninterest both classic and contemporary, favourite papers or books (perhaps selected from a list of most-cited works or something)* — and gets matched up with, say, 5 of the experts based on info of the same sort collected from them (a toy sketch of the matching step follows this comment). Then students can e-mail these experts and ask about faculty quality, but also about faculty reputations.
    *Of course, some students might not know many contemporary figures. So maybe you could even have 20 or so papers by contemporary philosophers posted in the database, and both students and panel members could pick five of them that they were most interested in and be matched up partly on that basis.
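    A toy sketch of the matching step proposed above; the comment does not specify a matching rule, so overlap of interest tags (Jaccard similarity) stands in for it, and every name and tag is invented.

```python
# Toy sketch of the student-expert matching proposed above. The matching
# rule (Jaccard overlap of interest tags) is a stand-in, not the comment's
# specification; every name and tag below is invented.

def jaccard(a, b):
    """Share of tags two interest profiles have in common."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

EXPERTS = {
    "Expert 1": {"ethics", "kant", "metaethics"},
    "Expert 2": {"language", "mind", "frege"},
    "Expert 3": {"ethics", "political", "rawls"},
    "Expert 4": {"logic", "frege", "language"},
    "Expert 5": {"mind", "perception", "kant"},
}

def match_experts(student_tags, k=5):
    """Return the k experts whose declared interests best overlap the student's."""
    ranked = sorted(EXPERTS, key=lambda e: jaccard(student_tags, EXPERTS[e]), reverse=True)
    return ranked[:k]

print(match_experts({"ethics", "kant", "rawls"}, k=3))
# -> ['Expert 1', 'Expert 3', 'Expert 5'] on this invented data
```

    Matching partly on chosen papers (as in the footnote) would just add paper IDs to each tag set.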

  22. Thanks, David Sobel, for encouraging us to move the conversation out of Facebook and somewhere more openly accessible.
    If the assumptions you initially outlined are true (and I think that whether or not we “need” rankings, the future is one in which some will pursue them regardless of our needs), then I would assume an extremely democratic system of overall rankings would be differently bad, rather than an improvement over personality-driven endeavors. We’re not electing a representative, but seeking a well-devised system of organizational assessment — preferably on multiple dimensions. It is not the case that every philosopher is equally good at devising such systems.
    Worries as to who will do future rankings may take care of themselves, if the suggestions to develop central sources of information are pursued. For example, I enjoy toying with databases that allow me to create multiple tables, like the Bureau of Labor Statistics and some Census Bureau sites. (I can imagine a parallel coming to pass for Philosophy programs: who has the most publications, and who has the worst record of his or her PhD students completing degrees or placing into academic jobs, would be fascinating tables.) Inevitably, if such rich information databases exist in future decades, then if students or faculty notice that a few schools have the most publications in Ethics, or the best placement record of minority-identifying students, they are going to say so. If one wants to be where X happens most, one is highly likely to seek out, and announce the results of, a search as to where X happens most.

  23. Here’s an excerpt from an FB post where I argue against rankings in philosophy:
    In my view there are at least two good reasons to think that the entire project of trying to sort philosophers, philosophy departments, philosophy specializations, and philosophy papers into one or more broadly linear rankings makes no sense (here I respond to Mitchell Aboulafia’s excellent call for debate on this topic).
    First, of all the disciplines, philosophy is arguably the most diverse in terms of topic, approach, and methodology: anything is potentially a topic of philosophical investigation; there are multiple canons and associated traditions or approaches to any given topic; and even among those taking a common approach to the same topic, there is often basic disagreement about methodology. Such diversity is to be applauded and, with very few exceptions, encouraged—we are too far from the end of philosophical inquiry to be dogmatic (see my paper, ‘Three Dogmas of Metaphysical Methodology’). That we can or should be trying to pull broadly linear rankings out of this wonderfully blooming buzz is, I think, ridiculous: one might as well try to rank flowers, cuisines, or cultures.
    Second, ranking systems encourage premature dogmatism, whereby the favoured topics, approaches, and methodology of those working at the top-ranked institutions (either absolutely, or relative to a given speciality) take on the sheen of ‘to be accepted’ by others. Hence it is that so many talented philosophers of the past generation have spent their valuable time working within frameworks whose foundational presuppositions are clearly and immediately questionable, while more plausible and illuminating approaches to the topics at issue are neglected (again, see my paper). What goes for premature dogmatism concerning which methodology is correct also goes for premature dogmatism concerning which topics are worth working on. And the people most likely to think, falsely, that there is a reliable correlation between the “top” departments/philosophers and the “best” topics and methodology are… you guessed it, the very people who are invoked as most benefiting from the PGR rankings: graduate students.
    What is needed is not a changing of the guard, but a complete rejection of ranking systems in philosophy, and an associated revolution in how we go about doing philosophy. I truly appreciate the remarkable and sustained effort Leiter has put into the PGR, in what he and many others have seen as a service to the profession. But, like late-stage capitalism, the rankings have become oppressive. We need to grab the means of production, and stop looking to others to tell us who and what is of philosophical value. The world is our philosophical oyster—let’s be creative! Let’s work on what interests us, and with whoever interests us, and articulate what we are doing in terms that don’t just iterate the latest citation circle. Let’s read elders and others that no one is discussing, read philosophy in other areas and traditions, study fields that might have who-knows-what impact on our work. Let’s make our case, and make up our minds, on our own terms.
    But what about the graduate students, poor little lambs—how will they figure out which grad school to attend? I suggest that the APA or some other institution manage an informational website with lists of/links to every single philosophy department in every country. (No doubt others may have suggested this or other strategies.) Let the institution get information from each department about what faculty members (and perhaps also graduate students) work on, and provide links to faculty (grad student) websites and PhilPapers profiles, and allow the overall lists to be restricted to those with one or more faculty members working in a given area or topic. The grad students will have to do more work, but they’ll manage, and it will be good for them. More generally, getting rid of the rankings will be good for both philosophy and philosophers. So let’s do it.

  24. I like the idea of a database with lots of information in different categories (placement by area, by type of institution, attrition rates, faculty specialty and publication records, climate), including blurbs as to why these categories of information are important for prospective students’ decisions. I would just like to add one reason to also incorporate the ability to produce summary-style rankings on given categories of information, in a transparent and fairly trivial way based on the data: some prospective students will have less time than others to trawl through the data and try to make sense of it. (Think jobs, family, single parenting, illness.) We don’t want to make it harder for those people to apply to grad school than it already is. So having some way to relatively quickly look at how departments stack up on e.g. placement by area and type of job would be useful. (This could include the option of ranking depts based on how well their graduates do on the non-academic job market, if the relevant data were available.)
    For some categories of information — e.g. faculty publication record — it will be more controversial how to generate the relevant summary rankings. One option is just to post a link to (perhaps various) rankings of journals, and not try to produce a summary ranking based on the publication data at all. One could also explicitly note, in a prominently available blurb on the relevant website, the fact that how to evaluate overall faculty quality (in a given area) by means of the department’s faculty publication record (in that area) is somewhat contentious.

  25. I’m a bit late, but I would like to add one potential problem with using placement rates — that they could create perverse incentives for departments regarding when and how to encourage applications. I know that several advisors think it’s a good idea to go on the market, applying to a smaller selection of jobs than would be optimal, before one is out of funding (if possible), and then go back on (if necessary) when one needs a job on pain of losing funding. Placement rates (especially given the small denominators resulting from the few applicants from philosophy departments, as opposed to medical schools, law schools, etc.) could be vastly affected by practices like these, as one is less likely to place under the early-applicant option than under an option where one just applies to everything that could possibly be justified. In fact, it may encourage keeping students as long as possible before having them apply to as many places as possible, rather than other alternatives.
    Further, if the anecdotal reports about PGR rankings affecting departmental pressures from Colleges’/Universities’ administrations are accurate, then I imagine such pressures would be more intense if placement rates were an important part of any ranking methodology (even one that can be adjusted by user preferences) — given how easy it would be to affect when people can go on the market, as opposed to doing things that actually benefit departments as a whole (e.g., making a new hire).
    It may be that the value of such a metric trumps the pro tanto reasons against it, but I thought this reason worth considering regardless.
