The Kiplinger Best Value College Index methodology weighs the “quality” side of a university against the “cost” side. The quality side includes selectivity, retention, and four-year graduation rates, while the cost side takes tuition, fees, merit aid, need-based aid, and post-graduation debt into account.
For the 16th straight year, UNC Chapel Hill leads as the best public value for both in-state and out-of-state (OOS) applicants.
The Southeast and Mid-Atlantic account for 10 of the top 25 best public value schools. West Coast universities in the UC system, along with the University of Washington, account for another half dozen in the top 25.
In the middle, so to speak, are traditionally strong publics including Michigan, UW Madison, Illinois, UT Austin, Minnesota, and Ohio State.
Acceptance rates vary widely among the top value schools, from a low of 15 and 17 percent at UC Berkeley and UCLA respectively, to a high of 66 percent at Illinois.
Other publics with relatively low acceptance rates include Michigan (26 percent); Cal Poly (31 percent); Georgia Tech (32 percent); UC Santa Barbara (33 percent); UC San Diego (34 percent); and UC Irvine and UT Austin (39 percent).
Below are the top 25 in-state public values, with the OOS ranking and Acceptance Rate listed as well.
In our latest book of honors program ratings, we listed the honors programs and colleges that received an overall five “mortarboard” rating. One component of the rating model used to determine the leading programs is prestigious scholarships: the number of Rhodes, Marshall, Truman, Goldwater, etc., awards earned by students from each university as a whole.
At most of these universities, honors programs account for the majority of these award winners, but not at all of them. So while the prestigious scholarship component is worth including, we do not want it to override the 12 other rating components used in the ratings. Those components are “honors only” because they do not include awards earned by non-honors students across the university as a whole.
Therefore, we decided to do a separate rating, one that is not included in the new book, INSIDE HONORS. The new rating uses only the 12 components listed below. Farther down, you can see whether the prestigious scholarship component had a major impact on the overall ratings of top programs.
Those 12 components are listed below; a schematic sketch of how they might combine into a single rating follows the list.
Curriculum Requirements
Number of Honors Classes
Number of Honors Classes in 15 Key Disciplines
Extent of Honors Enrollment
Average Class Size, Honors-only Sections
Overall Average Class Size, All Sections
Honors Graduation Rate, Raw
Honors Graduation Rate, Adjusted for Test Scores
Student to Staff Ratio
Type and Extent of Priority Registration
Honors Residence Halls, Amenities
Honors Residence Halls, Availability
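To make the components-only rating concrete, here is a minimal schematic sketch in Python. It is not the actual INSIDE HONORS model: the sketch assumes, purely for illustration, that each component has already been scored on a 1-5 scale and that all 12 components carry equal weight.

# A schematic sketch only, not the actual INSIDE HONORS rating model.
# Assumes each of the 12 honors-only components is pre-scored on a 1-5
# scale and that all components are weighted equally (both assumptions).
def honors_only_rating(component_scores):
    # Average the 12 component scores, rounded to the nearest half
    # "mortarboard" on the five-mortarboard scale.
    assert len(component_scores) == 12
    mean = sum(component_scores) / 12
    return round(mean * 2) / 2

# A hypothetical program strong on class sizes and graduation rates:
print(honors_only_rating([5, 4, 4, 5, 4, 4, 5, 4, 4, 5, 5, 4]))  # 4.5

In practice the components almost certainly carry different weights, but the structure, score each measure and then combine, is the same.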
Below is a comparison of the honors programs that received a five mortarboard OVERALL RATING (left side) and those that received the same rating for HONORS COMPONENTS ONLY (right side), all listed ALPHABETICALLY.
OVERALL FIVE MORTARBOARDS
New Jersey Inst Tech
HONORS ONLY COMPONENTS, FIVE MORTARBOARDS
New Jersey Inst Tech
Notably, the two lists are almost identical: Arizona State is on the OVERALL list but not the HONORS COMPONENTS list, while Temple is on the HONORS COMPONENTS list but not the OVERALL list.
We must add that Temple barely missed a five mortarboard overall rating, while ASU was similarly close to making the honors components list.
The new WSJ/Times Higher Education rankings have some interesting features, and they are certainly worth a look.
The rankings combine national universities and liberal arts colleges into one group, and in this way resemble the Forbes rankings. Also like the Forbes rankings, the salaries earned by graduates count as a metric: 12% of the total in the WSJ/THE rankings.
Farther down, we will list the top 100 colleges in the rankings. Only 20 of the top 100 schools are public; 31 are liberal arts colleges; and the remaining 49 are elite private universities. This is not much of a surprise, given that financial resources are a major ranking category.
Before listing the top 100, we will list another group of schools that have the best combined scores in what we consider to be the two most important umbrella categories in the rankings, accounting for 60% of the total: “Engagement” and “Output.”
Engagement (20% of total, as broken out below):
A. Student engagement: 7%. This metric is generated from the average scores per College from four questions on the student survey:
To what extent does the teaching at your university or college support CRITICAL THINKING?
To what extent did the classes you took in your college or university so far CHALLENGE YOU?
To what extent does the teaching at your university or college support REFLECTION UPON, OR MAKING CONNECTIONS AMONG, things you have learned?
To what extent does the teaching at your university or college support APPLYING YOUR LEARNING to the real world?
B. Student recommendation: 6%. This metric is generated from the average score per College from the following question on the student survey:
If a friend or family member were considering going to university, based on your experience, how likely or unlikely are you to RECOMMEND your college or university to them?
C. Interactions with teachers and faculty: 4%. This metric is generated from the average scores per College from two questions on the student survey:
To what extent do you have the opportunity to INTERACT WITH THE FACULTY and teachers at your college or university as part of your learning experience?
To what extent does your college or university provide opportunities for COLLABORATIVE LEARNING?
D. Number of accredited programs (by CIP code): 3%. This metric is the IPEDS-standardized number of Bachelor’s degree programs offered.
Output (40% of the total, as broken out below):
A. Graduation rate: 11%. This metric is the graduation rate within 150% of normal time to degree, as of 31 August 2014, for the cohort of full-time, first-time degree/certificate-seeking undergraduates (Bachelor’s or equivalent sub-cohort).
B. Graduate salary: 12%. This metric is the median earnings of students who are working and not enrolled 10 years after entry.
C. Loan default/repayment rates: 7%. This metric is the three-year loan repayment rate from College Scorecard data. The value-added component is the difference between actual outcomes and outcomes predicted from underlying student and College characteristics.
D. Reputation: 10%. This metric is calculated from the reputation survey: the number of US teaching votes plus the number of US-only teaching votes from the country section of the survey.
The two remaining umbrella categories measure Financial Resources, including the amount spent per student; and the Environment, including the diversity of enrolled students (or faculty) across various ethnic groups. You can find a summary of the methodology here.
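To see how the weighting works in practice, here is a minimal sketch in Python. The weights are the published percentages above; the assumption that each component score is first normalized to a 0-1 scale, and the sample college values, are ours rather than part of the WSJ/THE methodology.

# Published umbrella weights; component scores assumed normalized to 0-1.
ENGAGEMENT_WEIGHTS = {            # Engagement: 20% of the total
    "student_engagement": 7,
    "student_recommendation": 6,
    "faculty_interaction": 4,
    "accredited_programs": 3,
}
OUTPUT_WEIGHTS = {                # Output: 40% of the total
    "graduation_rate": 11,
    "graduate_salary": 12,
    "loan_repayment": 7,
    "reputation": 10,
}

def umbrella_score(normalized_scores, weights):
    # Weighted sum of normalized (0-1) component scores.
    return sum(weights[name] * normalized_scores[name] for name in weights)

# A hypothetical college with strong survey results and graduation rates:
college = {
    "student_engagement": 0.90, "student_recommendation": 0.85,
    "faculty_interaction": 0.88, "accredited_programs": 0.70,
    "graduation_rate": 0.95, "graduate_salary": 0.80,
    "loan_repayment": 0.90, "reputation": 0.60,
}
engagement = umbrella_score(college, ENGAGEMENT_WEIGHTS)  # 17.02 of 20
output = umbrella_score(college, OUTPUT_WEIGHTS)          # 32.35 of 40
print(engagement >= 17.0 and output >= 30.0)              # True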
Here are the 23 colleges that scored at least 17.0 (out of 20.0) in Engagement and at least 30.0 (out of 40.0) in Output, listed in order of their overall place in the WSJ/Times Higher Ed rankings:
Editor’s note: This post has now been updated, effective September 9, 2019, to include new U.S. News rankings for 2020. Listed below are the yearly rankings and overall average rankings of 123 national universities that were included in the first tier of the U.S. News Best Colleges from 2013 through 2020. There are 61 public and 62 private universities. The list below not only shows the average rankings over this eight-year period but also lists the number of places lost or gained by each university.
U.S. News has changed its methodology, and there are some significant changes in the results, especially below the top 30-35 places in the rankings. There were major gains for Florida, Florida State, Georgia, Georgia Tech, and most UC campuses.
Beginning in the 2019 edition, U.S. News “factored a school’s success at promoting social mobility by graduating students who received federal Pell Grants (those typically coming from households whose family incomes are less than $50,000 annually, though most Pell Grant money goes to students with a total family income below $20,000).”
This has shaken up the rankings quite a bit, and the trend will continue. Previously, a school’s wealth drove the rankings, without regard to the number of low-income students enrolled. Now school wealth can have a different impact by enabling institutions with more resources to “afford” the enrollment of more students who cannot pay full tuition. Some universities that lack big endowments formerly raised their rankings by enrolling students with higher test scores, even if merit aid was necessary. Now that model might not be as effective, since many of those students did not receive Pell grants.
While we appreciate the massive amount of data that the U.S. News rankings provide on class sizes, grad rates, retention rates, and even selectivity, on the whole the rankings fail to evaluate efficiency (the number of students who receive a high-quality education at a relatively low cost) and should not use selectivity and wealth as metrics.
Here are the historical rankings, the average of each school across eight years, and the increase or decline of each school from 2013 through 2020. The universities are listed in order of their average ranking across the years.
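For readers who want to reproduce the two computed columns, the average ranking and the places gained or lost, here is a minimal sketch in Python; the sample ranks are hypothetical, not any school’s actual history.

def summarize(yearly_ranks):
    # yearly_ranks: a school's first-tier ranks in chronological order,
    # here the 2013 through 2020 editions. Lower numbers are better, so
    # a positive change means the school moved up the rankings.
    average = sum(yearly_ranks) / len(yearly_ranks)
    change = yearly_ranks[0] - yearly_ranks[-1]
    return round(average, 1), change

# A hypothetical school ranked 52nd in 2013 and 48th in 2020:
print(summarize([52, 50, 53, 49, 47, 48, 46, 48]))  # (49.1, 4)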
The 2016 edition will have a new name– Inside Honors: Ratings and Reviews of 60 Public University Honors Programs. It is in the final proofing stage now. The goal is to publish in late September. Each edition includes a somewhat different group of honors colleges and programs, so there will be changes, even among the 40 or so programs that are reviewed in each edition.
As I have noted in previous updates, the book will take an almost microscopic view of 50 of these programs and also provide more general summary reviews of 10 additional programs. I can say now that there will be a few more programs that will receive the highest overall rating of five “mortarboards” than there were in 2014. (The final list of programs we are rating and reviewing for 2016 is below.)
The rating system makes it possible for any honors college or program, whether part of a public “elite” or not, and regardless of program type or size, to earn the highest rating. Those receiving five mortarboards will include core-type programs with fewer than 1,000 students and large honors programs with thousands of students. And absent any intentional preference for geographical diversity, the list does in fact include programs from north, south, east, and west.
By microscopic, I mean that the rating categories have increased from 9 to 14, and so has the depth of statistical analysis. The categories are, first, the overall honors rating; curriculum requirements; the number of honors classes offered; the number of honors classes in “key” disciplines; the extent of honors participation by all members in good standing; honors-only class sizes; overall class size averages, including mixed and contract sections; honors grad rates, adjusted for admissions test scores; ratio of students to honors staff; type of priority registration; honors residence halls, amenities; honors residence halls, availability; and the record of achieving prestigious scholarships (Rhodes, Marshall, Goldwater, etc.).
Sometimes readers (and critics) ask: Why so few programs? Doesn’t U.S. News report on hundreds of colleges?
The answer is: Honors colleges and programs are complicated. Each of the 50 rated reviews in the new edition is 2,500-3,000 words, or 7-8 pages, in length. That’s almost 400 pages, not including introductory sections. The rest of the answer is: We are not U.S. News. With myself, one assistant editor, a contract statistician, and an outsourced production firm, our ability to add programs is very limited.
The 2016 profiles are full of numbers, ratios, and averages, more than in 2014 certainly–and too many, I believe, for readers who would prefer more narrative summary and description. So, yes, it is a wonkish book, even to a greater extent than this website tends to be. But then, they are honors programs after all.
Arizona State Honors
Central Florida Honors
Colorado State Honors
CUNY Macaulay Honors
Georgia State Honors
New Jersey Inst of Tech
New Mexico Honors
North Carolina Honors
Oklahoma State Honors
Oregon State Honors
Penn State Honors
South Carolina Honors
South Dakota Honors
Texas A&M Honors
Texas Tech Honors
UC Irvine Honors
University of Utah Honors
UT Austin Honors
Virginia Commonwealth Honors
Virginia Tech Honors
Washington State Honors
Florida State Honors
New Hampshire Honors
Ohio Univ Honors
Western Michigan Honors
As a former Navy SEAL and the admiral in command of all U.S. Special Operations forces, UT System Chancellor Bill McRaven spent three decades serving in and leading the most elite military forces in the world. He has made it clear that he now wants the state flagship to join the best of the best among the nation’s public universities.
But after seeing the system flagship turn away thousands of the state’s elite students because they did not make the top 10 percent (actually 7 percent at UT Austin this year) in the graduating classes of the state’s most competitive high schools, the chancellor sees the automatic admission rule as a major obstacle to keeping the brightest students in Texas–and at UT Austin.
The motives of the Top 10 proponents are certainly worthy–to increase the enrollment of high-achieving minority students at UT Austin. But what makes the rule (sort of) work is that it is predicated on the fact that many of the state’s high schools remain almost entirely segregated. Sometimes this is because an entire region is heavily Latino (the Rio Grande Valley); but elsewhere the segregation in urban centers is based on race and income.
Many of these high schools are among the least competitive in the state. Graduating in the top 7 percent of a high school that offers no AP or honors sections and that has low mean test scores is far different from reaching the top 7 percent of a graduating class of 800 students that has 70 National Merit Scholars.
What can happen to suburban students at very competitive schools is that an unweighted high school GPA of 3.9 (high school rank top 11 percent) and an SAT score of 1440 might not make the cut at UT Austin. Three-fourths of the school’s admits are from the top 7 percent pool; the other 25 percent of admits face a pool that is as competitive as many of the nation’s most selective private colleges.
And, McRaven would say, too many of these students are going out of state, where it costs them more and where they might remain rather than return to Texas. Moreover, the chancellor believes the rule is part of the reason that UT Austin, despite having a stellar faculty, is not rated as highly as it should be among the nation’s public universities.
“Candidly, I think we need to take a hard look at some of the ways that we address higher education, particularly at our flagship program. Your flagship, your number one university in the state of Texas is ranked 52nd on the U.S. News & World Report. To me that’s unacceptable. A lot of things drive that. The 10 percent rule drives that,” he told higher ed leaders.
While he did not specify exactly how the rule contributes to lower rankings, the graduation rate metric used by U.S. News might be lower for UT Austin in part because of the relatively lower standards in many poor and mostly segregated high schools. (It is possible that the chancellor also sees the large size of UT Austin as another issue.)
If the chancellor can find a way to maintain or improve minority enrollment and do away with the Top 10 rule, he might prevail. If the U.S. Supreme Court does not scrap the university’s current holistic admissions policy for students outside the top 7 percent, he might have a better chance; otherwise, his task will be as difficult as many he faced as a military leader.
“[The Top 10 Rule] is a very very sensitive topic,” State Rep. Robert Alonzo told McRaven. “It is a topic that we have discussed at length from all different aspects, and I would hope that we have put it to rest for a while.”
McRaven was undeterred. “I am a new chancellor, so I am going to take that opportunity to re-open that look again,” he said. “Because my charge is to make us the very best, and I think there are some obstacles to doing that.”
Alonzo replied: “Well, I accept the challenge, sir.”
Stay tuned, for this could be a big battle indeed.
The annual composite MBA ranking compiled by John A. Byrne at Poets & Quants combines rankings from the “five most influential rankings and weighs each of them by the soundness of their methodologies” in order to yield “a more credible list of the best MBA programs.”
We like Poets & Quants and Byrne’s rankings and try to write about them each year. The rankings he combines into the composite list are those from U.S. News, Forbes, Bloomberg, the Financial Times, and the Economist.
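The weights Byrne assigns are his own; as a rough illustration of the general approach, here is a sketch in Python with placeholder weights (the numbers are hypothetical, not those used by Poets & Quants).

# Weighted composite of five published MBA rankings. The weights below
# are placeholders; Poets & Quants sets its own, based on the soundness
# of each ranking's methodology.
WEIGHTS = {"usnews": 0.30, "forbes": 0.25, "bloomberg": 0.15,
           "ft": 0.15, "economist": 0.15}

def composite_score(ranks):
    # Weighted average of a program's five ranks; lower is better,
    # mirroring the input rankings.
    return sum(WEIGHTS[source] * rank for source, rank in ranks.items())

# A hypothetical program:
print(round(composite_score({"usnews": 7, "forbes": 10, "bloomberg": 12,
                             "ft": 15, "economist": 9}), 2))  # 10.0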
Here are the public MBA programs listed in the top 50 for 2015, and their composite rank:
Is it actually worth it, in terms of quality classroom learning, to land a place at an elite college or university? This is a question that many families with highly talented students ask themselves. If their answer is yes, the result is likely to be a concerted, frenzied effort to mold the students in a way that gives them at least a modest chance of admission to such schools. (Of course, for better or worse, the question is often framed as “Is it worth it, in terms of career success, to land a place…”).
Regarding the differences in the quality of classes among all levels of institutions, new research provides some insights. The researchers lean toward minimizing the relationship between academic prestige and quality of instruction–but it appears that some of their own research suggests just the opposite.
In an article titled “Are Elite College Courses Better?”, Doug Lederman, editor and co-founder of Inside Higher Ed, provides an excellent, mostly neutral summary of the recent research, which suggests that course quality across a relatively broad range of institutions does not vary as much as the prestige of a given school might suggest.
“Researchers at Teachers College of Columbia University and at Yeshiva University… believe they are developing a legitimate way to compare the educational quality of courses across institutions,” Lederman writes, “and their initial analysis, they say, ‘raises questions about the value of higher-prestige institutions in terms of their teaching quality.'”
The researchers suggest that the drive to enhance prestige based on rankings and selectivity has led to “signaling” (branding, perceptions) that is increasingly divorced from the actual quality of classroom instruction. The laudable aim of the researchers is to turn the conversation away from college rankings and the metrics that drive them, and toward measurements of effective, challenging instruction.
Trained faculty observers visited nine colleges and 600 classes. Three of the nine had high prestige; two had medium prestige; and four had low prestige. The schools were both public and private, with differing research and teaching emphases. We should note that there was no list of which schools were in each category, so we do not know exactly how the researchers defined “elite.” It appears likely, however, that many leading public research universities would be considered elite.
“Teaching quality was defined as instruction that displayed the instructor’s subject matter knowledge, drew out students’ prior knowledge and prodded students to wrestle with new ideas, while academic rigor was judged on the ‘cognitive complexity’ and the ‘level of standards and expectations’ of the course work,” Lederman writes.
“But they found that on only one of the five measures, cognitive complexity of the course work, did the elite colleges in the study outperform the non-elite institutions.”
First, we note that highly qualified honors students at almost all colleges, including many less prestigious public universities, are far more likely to encounter “cognitive complexity” in their honors courses. Whether this results from having more depth or breadth in actual assignments, from taking harder courses early on, or from engaging in more challenging interactions with similarly smart students and the best faculty, the learning experience in honors embraces complexity.
We also have to agree with one of the longest and most thoughtful comments posted on Lederman’s article, by one “catorenasci”:
“Well, is [more cognitive complexity] a surprise to anyone? After all…on average the students at elite colleges and universities (private or public) have demonstrated higher cognitive ability than the students at less prestigious colleges and universities. Which means that the faculty can teach at a level of greater cognitive complexity without losing (many) students.”
The full comment from “catorenasci” also seems to be on the mark in explaining why colleges with less prestige, regardless of honors affiliation, matched the elite schools on all the other measures of instruction.
“As for the level of ‘teaching quality’ based on faculty knowledge, given the job market today, it should hardly be surprising that it has equaled out since there are many top quality candidates for even less prestigious positions and overall, I would suspect that the ‘quality’ of the PhD’s of faculty at less elite schools is much closer to that of elite schools than it was during the ’50s and ’60s when higher education was expanding rapidly and jobs were plentiful.
“The transformational aspect should not be surprising either: assuming faculty are competent and dedicated, with less able students they will work harder to draw out what they know and build on it. And, it will be more likely that students will experience significant growth as the faculty do this.”
The annual Times Higher Education World University Rankings have had the strongest presence in the ranking “world” since 2004, but here’s one vote for the U.S. News Best Global Universities rankings being better even though they have been around only two years. Both are useful because they measure the prestige and research impact of hundreds of universities around the world at a time when there is much more international cooperation–and competition–among institutions.
It is rare for us to applaud the U.S. News rankings because there are many serious issues with the annual “Best Colleges” publication. It over-emphasizes the financial resources of colleges and their selectivity, to the detriment of most public universities.
But when it comes to world rankings, U.S. News drops the focus on financial metrics in favor of academic reputation and research metrics, including the use of regional reputation surveys that help to offset the eurocentric bias of the Times Higher Ed rankings.
For example, the Times Higher Ed rankings list 42 European universities among the top 100 in the world, while U.S. News lists 31. The main reason is probably that the Times rankings do include financial metrics and do not factor in the additional regional reputation data.
Below is a table showing the U.S. public universities ranked among the top 100 in the world by U.S. News alongside the rankings of the same universities by Times Higher Ed. An additional column shows the average ranking of each school when both ranking systems are used. The average ranking of leading U.S. public universities by U.S. News is 44 out of 100; the average Times Higher Ed ranking of the same schools is 82.
Editor’s note: This post updated on October 1, 2016, after release of Times Higher Ed Rankings for 2016.
It is likely that Philip G. Altbach, a research professor and the founding director of the Center for International Higher Education at Boston College, has the sharpest eye of anyone in America when it comes to seeing how well U.S. universities compare with rapidly improving institutions throughout the world. What he sees is not good.
U.S. public universities are losing ground to foreign institutions, most notably in Europe and Scandinavia.
(Below the following text are three tables. The first compares the Times Higher Ed world rankings of U.S. universities and those throughout the world for the years 2011 and 2016. The second shows the decline in rankings for U.S. public universities for the same years. The third and final table shows the rise in rankings for U.S. private universities.)
Altbach cites the work of colleague Jamil Salmi, who found that there are at least 36 “excellence initiatives” around the world “that have pumped billions of dollars into the top universities in these countries — with resulting improvements in quality, research productivity, and emerging improvements in the rankings of these universities. Even in cash-strapped Russia, the ‘5-100’ initiative is providing $70 million into each of 15 selected universities to help them improve and compete globally.” [Emphasis added.]
“At the same time, American higher education is significantly damaging its top universities through continuous budget cuts by state governments. One might call this an American “unExcellence initiative” as the world’s leading higher education systematically damages its top research universities. Current developments are bad enough in a national context, but in a globalized world, policies in one country will inevitably have implications elsewhere. Thus, American disinvestment, coming at the same time as significant investment elsewhere, will magnify the decline of a great higher education system.”
One reason for the decline: all the bad publicity about cuts in funding, along with high-profile political grandstanding that has received far too much attention throughout the world academic community. For example, UT Austin endured years of highly publicized attacks by former Governor Perry during his second term, and UW Madison has been hurt by similar actions on the part of Governor Scott Walker.
Unless state legislatures move toward excellence and restore pre-Great Recession funding levels, disinvestment will bring “a revolution in global higher education and create space for others at the top of the rankings. It would also be extraordinarily damaging for American higher education and for America’s competitiveness in the world.”
The average ranking for the 23 U.S. publics among the top 100 in the world in 2011 was 39th, while in 2016 it was 49th–and only 15 publics were among the top 100. Meanwhile, leading U.S. private universities have seen both their average reputation rankings and overall rankings rise since 2011.
To illustrate Professor Altbach’s point, we have generated the tables below.
This table compares the Times Higher Ed world rankings of U.S. universities and those throughout the world for the years 2011 and 2016.
Top 100 Overall
The following table shows the decline in rankings for U.S. public universities for the years 2011 and 2016. UC Davis is the only school to rise, while most dropped significantly.
Times Higher Ed Rankings
UC San Diego
UC Santa Barbara
This last table shows the rise in rankings for leading U.S. private universities for the years 2011 and 2016.