For a while now we have written about alternative ways to view the annual U.S. News college rankings. (Please see An Alternative List of 2013 U.S. News College Rankings for an example.) Our view is that these rankings have placed too much emphasis on the financial resources and selectivity of institutions, often to the detriment of public universities. So far, the negative impact of that over-emphasis has been significant but not profound.
But for the last two years, the magazine has upped the ante, and lowered the “value” of public universities, by assigning over-performance or under-performance rankings based on a comparison of a given school’s undergraduate academic reputation rank with the magazine’s overall ranking. If the U.S. News rank is better than the reputation rank, then the school has over-performed relative to its reputation. If the magazine rank is worse than the reputation rank, then the school is under-performing.
The magazine’s resident number-cruncher, Robert Morse, is clear about the new analysis and its impact on public research universities:
“Many of the over-performers are relatively small research universities that grant fewer doctorates and conduct less research than other schools in their category. All the under-performers are large public universities—in some cases the top ‘flagship’ public in their state—whose academic reputation rank exceeds the performance in the academic indicators.” [Emphasis added.]
So, anyway, that’s the shot across the bow from U.S. News. Now for some facts and explanation.
U.S. News ranks universities based on “inputs” and “outputs.” The former include test scores, high school GPA, and multiple categories related to financial resources; the latter include graduation rates, student-to-faculty ratios, and class size. The problem with the rankings is largely due to an over-emphasis on inputs.
For example, almost all leading private universities have strong financial resources, which allow them to hire more faculty and achieve better student-to-faculty ratios, which in turn result in smaller class sizes. So wealthy universities get credit both for their wealth (an input) and for what that wealth produces (an output). This compounds the impact of financial resources.
What works well for wealthy private universities has a compound negative effect on public universities: they are penalized for having less money, and then penalized again for having, say, too many large classes. A better system would consider only the output, the class sizes themselves. Likewise, awarding points for both high selectivity (an input) and high graduation rates (a directly related output) magnifies the mostly legitimate consideration of graduation rates in favor of the most selective institutions. Yes, Harvard has a high graduation rate; but if you took students from the University of Michigan or UVA who had SAT scores of 2200 and higher, they would graduate at a comparable rate.
If you think that more money always wins out, please go to our link above. There we show that if you strip away alumni giving, the impact of endowment, and the other financial metrics, and focus only on the essentials of academic reputation, graduation rates, and small classes, the publics do better overall than they do when the financial metrics and their magnifying effect are included.
There are 26 public universities among the top 50 national universities with the best academic reputations. Of these 26 public institutions, only three are ranked higher than their reputations, leaving 23 of the nation’s best public universities with the stigma of “under-performance.” Does this make sense to you—that academics are that wrong about their peers, while U.S. News is absolutely right? Or could the real problem be the methodology of U.S. News?