MORE INSOLENCE. Over at the Washington Monthly, Paul Glastris responds, at the behest of Kevin Drum (who agrees with me), to my criticism of the Monthly's college rankings. My criticism was that the "research" component of the rankings is not scaled by size and so penalizes small schools for being small even if they produce a great deal of research relative to their size. Glastris defends the practice on both theoretical and practical grounds. First, the theoretical:
There's no reason to suppose that large schools would have a natural advantage over small schools in, say, recruiting and graduating low-income students. But there are reasons to believe that large schools have several legs up when it comes to doing cutting-edge research (decoding genes, exploring subatomic particles) and producing graduate students who are familiar with that research. Sure, such work can be done in small schools. It's also possible to make great films, design innovative software, or publish award-winning glossy magazines in small towns and provincial cities. But it doesn't happen nearly as often as it does in LA, Seattle, or New York, in large part because these large metro areas can support the thick labor markets and webs of interconnected companies that are required to do this kind of collaborative work easily.
This doesn't hold water at all. Glastris may very well be right that large schools have network effects that promote research, but, if that is the case, large schools will do well in the rankings even after size is corrected for. To use his example, if you divided the number of films and magazines produced in LA by the number of people who live there and did the same for Des Moines, LA would still come out ahead. In fact, the only way to know for sure whether large schools really are inherently better at producing research is to correct for size.
His second reason is practical:
Say you wanted to get rid of the bias towards large schools. To do that, you'd have to divide each institution's total PhD and research output by some other factor -- say, total faculty, or total number of faculty members teaching graduate students or doing research. The problem is that schools don't report faculty and researcher numbers in consistent ways. Some only count professors, not adjunct lecturers or researchers in university-based institutes, who often do much of the graduate-level teaching. Others count researchers -- say, at affiliated hospitals -- who never set foot in the classroom. Judging the research-and-PhD component by the reported number of faculty would give schools with the narrowest definition of faculty an edge.
I feel for him, but, if we accept that it would be better to adjust for size, this is not a sufficient excuse. No measure at all is better than one that vastly distorts reality while purporting to represent it. Let's say you didn't know the population of American cities. Would you try to measure their creativity by simply looking at the number of movies each produces? You could, but you wouldn't actually be producing any meaningful knowledge about where an aspiring filmmaker should move. Nor am I convinced that the methodological problem is insurmountable. What if they looked at the total number of graduate students (who do most research anyway)? Or the number of tenured faculty?
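To make the arithmetic concrete, here is a toy sketch of the adjustment I have in mind (the school names and figures are invented, and graduate enrollment stands in for whatever size proxy the Monthly could actually collect): divide the raw research measure by the size proxy, and the ordering can flip.

```python
# Toy illustration of size-adjusting a research score.
# School names and all numbers are made up for the example.
schools = {
    # name: (research output in $ millions, graduate students)
    "Big State U": (900, 12000),
    "Small College": (120, 1000),
}

# Ranking by raw research output favors the big school.
raw_ranking = sorted(schools, key=lambda s: schools[s][0], reverse=True)

# Dividing by graduate enrollment gives a per-student figure.
per_student = {
    name: dollars / students for name, (dollars, students) in schools.items()
}
adjusted_ranking = sorted(per_student, key=per_student.get, reverse=True)

print("Raw research output:", raw_ranking)       # Big State U first
print("Per grad student:   ", adjusted_ranking)  # Small College first
```

The point is not that this particular divisor is the right one, only that any reasonable size proxy would tell you something the raw total cannot.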
I'd be interested in his response and, for that matter, his explanation of why the rankings include the Peace Corps but not Teach For America or AmeriCorps, and what he says to people who point out that the inclusion of the ROTC numbers discriminates against schools with large gay populations. I'm pursuing this because I believe in the concept behind the rankings, and I think that, if they became more widely followed, they might encourage schools to do a variety of positive things. A world where administrators get bonuses for raising their Washington Monthly rankings as well as their U.S. News rankings would be a better one. But this can only happen if the rankings are credible and, right now, they aren't.
--Sam Boyd