Rankings are not definitive. One only has to look at two of the major national rankings to see how dramatically they differ: U.S. News, with its moderate bias toward elite private schools, and Forbes, whose formerly quirky criteria resulted in only two public universities (other than service academies) being ranked among the top 50 in the nation as recently as 2011.
(Update December 6, 2012: U.S. News now appears to be leaning even more toward private colleges. Is It Time for Public Universities to Say Goodbye to U.S. News?)
(Update June 24, 2013: Business Insider’s Fifty Best Colleges According to Readers.)
(Update July 25, 2013: Forbes Rankings 2013: A Mild Shift to Publics.)
Going one step further, to the world rankings of universities, we often find the puzzling result of an American university ranked far higher worldwide than it is within the United States alone.
The basic reason for these seeming contradictions is clear and widely recognized: each ranking system uses different criteria, which yield differing results. For example, the world rankings emphasize academic research, while the national rankings focus on graduation rates, funding, class size, or student satisfaction.
What does this mean for the consumer? Should we focus on the alleged prestige of the ranking authority, or accept at face value the lists that are kindest to our favorite colleges? Or should we decide that the rankings are so slanted and disparate that they should all be ignored?
There is a way to use the multiplicity of perspectives to best advantage. But that way requires the reader to devote some time to understanding the different criteria and the weight attached to each. A few minutes can tell us a lot about a particular ranking and enable us to decide as individuals which aspects of that ranking can be useful to us.
For example, the Forbes rankings emphasize “outputs,” including student satisfaction, graduation rates, post-graduate success, and average student debt. Post-graduate success counts for a whopping 32.5 percent of the total. What is post-graduate success? Formerly, only half of it was based on pay; the other half was based on membership in Who’s Who, on the number of alumni on the Forbes list of prominent corporate officers, and on the number of graduates who are top federal or state officials (added in 2012).
But for 2013, Forbes has moved to a significantly more balanced methodology, which includes points for the attainment of prestigious faculty and student awards and for high freshman retention.
The U.S. News rankings, on the other hand, emphasize academic reputation and departmental strength as a separate metric, along with an over-emphasis on the financial resources of each university. U.S. News counts the amount of financial resources a school has and then also gives points for the effects those resources have on, say, class size or the ratio of students to faculty. The latter are valid outputs, but adding the amount of financial resources as a separate metric magnifies the impact of wealth. Also important in the rankings are selectivity, class size, and retention and graduation rates. The emphasis on selection stats (selection percentage, test scores, GPAs) is one factor that drives many colleges to recruit as many students as they can to show how much in demand they are. (We do not use selection stats as a metric.)
Like the Forbes rankings, but for different reasons, the U.S. News rankings tend to favor private universities, though not as strongly as the Forbes rankings did before 2013. The U.S. News rankings do a good job of answering many important questions that consumers have about colleges, but beware of the bias toward private schools.
The various world university rankings are reasonably good at pointing out which American universities have prominent research profiles in the world, and this is useful for consumers who are considering graduate schools or who want to engage in important research as undergraduates. On the other hand, these rankings do not provide a full portrait of more student-centered results, such as class size and graduation rates.
Regarding our own rating of honors colleges and programs in 2014, we did not use faculty reputation or selection stats as metrics in themselves, although faculty influence is reflected in other categories, as we have noted above. (We did list the best academic departments in each school, but we do not count them as a metric.) What we emphasized most was honors curriculum, especially its reach across four years; honors course range and type; actual class sizes; and actual graduation rates.
Our second edition does not feature numerical rankings at all. This is because we have come to believe that the differences among honors programs, while they can be assessed, cannot be reported in the minute increments necessary for numerical rankings. We use a 5-mortarboard rating system instead. The second edition actually has far more comparative information than the first edition, including more individualized data and more detailed narratives for each of the 50 programs.
Finally, and most important in the case of honors programs, please visit the program. If you use our rankings at all, please use them as the basis for questions rather than as a comprehensive answer. As with any ranking, ours is only useful insofar as it applies to your individual situation.