Here Are the Public Universities That Award the Most Non-need-based Aid

A report by the New America Foundation, The Out of State Student Arms Race, is the subject of another post on this site, How Much Should Public Universities Spend on Merit Aid? Although we have some disagreements with the New America report, it contains interesting arguments against the excessive use of non-need-based aid by public universities along with a list of those universities that provide the highest percentages of non-need-based aid to incoming freshmen.

The report would find full agreement from this quarter if it had been produced at a time past, when public universities received most of their funding from state appropriations and could maintain lower tuition rates for all. Now, unfortunately, many public institutions are forced to use merit aid more “strategically,” sometimes as part of the recruitment of out-of-state students and the greater revenue they bring, even after merit funding. To the extent that this use of merit aid works to deny access to merit-worthy, low-income applicants in-state, we do agree with the New America Foundation.

(Please note that separate posts discuss National Merit Scholarship aid, by institution. This post addresses the availability of all types of merit aid.)

In any event, the list below should be helpful to some parents with FAFSA income levels that are relatively high but that may still be stretched to the limit without non-need-based aid. We are not listing all the public universities in the report, but most of the larger ones are included. After the university name, we list the percentage of freshmen receiving non-need-based aid, followed by the average dollar amount of that aid per student. Most of the data are from 2013-2014. Schools where at least 20% of freshmen receive at least $4,000 in average merit aid are listed in bold.

Public universities below with the highest average per capita merit aid are UT Dallas ($13,766); Alabama ($11,919); Colorado ($9,497); Vermont ($9,283); North Carolina ($8,393); Miami Ohio ($8,174); Arizona ($8,137); Alabama Birmingham ($8,020); and New Hampshire ($8,020). Please note that some schools may sponsor very high numbers of National Merit Scholars (e.g., Oklahoma), but not provide as much merit aid in other forms. Still other schools (Alabama) fund both NMS aid and other merit aid at generous levels. And then there are the public elites that fund little or no aid that is not need-based.

 

North Dakota–41.73% of freshmen–$1,173 per student

Truman State–40.5% of freshmen–$4,693 per student

South Carolina–39.1% of freshmen–$5,253 per student

Vermont–33.3% of freshmen–$9,283 per student

Iowa State–32.6% of freshmen–$3,049 per student

Miami Ohio–31.3% of freshmen–$8,174 per student

West Virginia–30.7% of freshmen–$2,604 per student

Ohio State–29.9% of freshmen–$6,757 per student

UT Dallas–29.8% of freshmen–$13,766 per student

Auburn–29.6% of freshmen–$5,976 per student

Montana–29.3% of freshmen–$3,250 per student

SUNY Plattsburgh–28.9% of freshmen–$6,237 per student

Clemson–27.4% of freshmen–$7,456 per student

Alabama Huntsville–27.1% of freshmen–$7,494 per student

Oklahoma State–27% of freshmen–$6,291 per student

Colorado–26.9% of freshmen–$9,497 per student

Michigan Tech–26.7% of freshmen–$5,367 per student

Troy Univ–26.5% of freshmen–$5,132 per student

Arizona State–25.7% of freshmen–$7,733 per student

Col School of Mines–25.6% of freshmen–$7,391 per student

Mississippi–25.6% of freshmen–$6,876 per student

Alabama Birmingham–24.7% of freshmen–$8,020 per student

Delaware–24.6% of freshmen–$6,074 per student

Salisbury–24.5% of freshmen–$2,127 per student

South Dakota–24.5% of freshmen–$4,505 per student

Southern Utah–24.5% of freshmen–$3,863 per student

Alabama–24.4% of freshmen–$11,919 per student

Arizona–24% of freshmen–$8,137 per student

Kansas State–24% of freshmen–$4,145 per student

Mississippi State–24% of freshmen–$3,527 per student

Iowa–23% of freshmen–$4,115 per student

Oklahoma–22.7% of freshmen–$4,540 per student

Kentucky–22% of freshmen–$7,789 per student

Missouri–21.1% of freshmen–$4,763 per student

Idaho–21.1% of freshmen–$3,133 per student

Maryland–19.9% of freshmen–$6,451 per student

Michigan–17.9% of freshmen–$4,938 per student

Indiana–17.6% of freshmen–$7,671 per student

Minnesota–17.4% of freshmen–$5,875 per student

Kansas–17.4% of freshmen–$3,235 per student

Arkansas–16.3% of freshmen–$4,145 per student

LSU–15.2% of freshmen–$3,233 per student

Alaska Fairbanks–15% of freshmen–$4,306 per student

Tennessee–13.8% of freshmen–$1,571 per student

New Hampshire–13% of freshmen–$8,020 per student

UC Berkeley–13% of freshmen–$4,583 per student

Maine–12.8% of freshmen–$4,030 per student

Connecticut–12.8% of freshmen–$7,045 per student

Rutgers–12.1% of freshmen–$4,300 per student

Massachusetts–11.8% of freshmen–$4,386 per student

Nebraska–11.6% of freshmen–$5,589 per student

Illinois–10.9% of freshmen–$3,980 per student

Rhode Island–9% of freshmen–$6,354 per student

Penn State–7.8% of freshmen–$3,230 per student

Utah–7.7% of freshmen–$7,917 per student

Wisconsin–7% of freshmen–$3,989 per student

Georgia–6.9% of freshmen–$2,019 per student

Florida–5.4% of freshmen–$2,000 per student

Oregon–5.3% of freshmen–$5,207 per student

North Carolina–3.2% of freshmen–$8,393 per student

Univ at Buffalo SUNY–2.6% of freshmen–$6,030 per student

Virginia–2.5% of freshmen–$5,821 per student

Washington–2% of freshmen–$7,000 per student

UT Austin–1% of freshmen–$5,586 per student

The Academic Reputation Ranking in U.S. News: What It Means for Honors Students

Editor’s Note: This post was updated on August 15, 2017, to include new honors class size averages based on our most recent data.

In a previous post, Based on Academic Reputation Alone, Publics Would Be Higher in U.S. News Rankings, we note that many public universities have a reputation in the academic community that is much higher than their overall U.S. News ranking.  In this post, we summarize the reasons that prospective honors students and their parents might pay more attention to academic reputation than to other factors in the oft-cited rankings.

(Another related post: Alternative U.S. News Rankings: Lots of Surprises.)

First, these are factors to consider if the state university’s academic reputation is much stronger than its overall ranking:

1.  The overall rankings penalize public universities for their typically larger class sizes, but the average honors class size in the 50 major honors programs we track is 26.3 students, much smaller than the average class size for the universities as a whole.  Most of these honors classes are lower-division courses, the level at which large classes are otherwise the norm. First-year honors seminars and honors-only classes average 19 students per section.  Result:  the relatively poor rating the whole university might receive for class size is offset for honors students.

2.  The overall rankings hit some public universities hard for having relatively low retention and graduation percentages, but freshman retention rates in honors programs are in the 90% range and higher; meanwhile, six-year grad rates for honors entrants average 89%–much higher than the average rates for the universities as a whole.  Result: the lower rates for the universities as a whole are offset for honors students.

3.  All public universities suffer in the overall rankings because U.S. News assigns ranking points both for the wealth of the university as a whole and for the impact that wealth has on professors’ salaries, smaller class sizes, etc.  This is a double whammy: wealth is counted once as an input and again through its outputs, when only the outputs should be rated.  Result: the outputs for class size (see above) are offset for honors students, and the wealth of the university as an input should not be considered in the first place.

4.  For highly-qualified students interested in graduate or professional school, academic reputation and the ability to work with outstanding research faculty are big advantages. Honors students have enhanced opportunities to work with outstanding faculty members even in large research universities, many of which are likely to have strong departmental rankings in the student’s subject area.  Result: honors students are not penalized for the research focus of public research universities; instead, they benefit from it.

5.  Many wealthy private elites are generous in funding all, or most, need-based aid, but increasingly offer little or no merit aid.  This means that families might receive all the need-based aid they “deserve” according to a federal or institutional calculation and still face annual college costs of $16,000 to $50,000.  On the other hand, national scholars and other highly-qualified students can still receive significant merit aid at most public universities.  Result: if a public university has an academic reputation equal to that of a wealthy private elite, an honors student could be better off financially and not suffer academically in a public honors program.

But…what if the academic reputation of the public university is lower than that of a private school under consideration?   In this case, the public honors option should offer the following offsets:

1. The net cost advantage of the public university, including merit aid, probably needs to be significant.

2.  It is extremely important to evaluate the specific components of the honors program to determine whether it provides a major “value-added” advantage: is it, relatively, better than the university as a whole?  Typically, the answer will be yes.  To determine how much better, look at the academic disciplines covered by the honors program, the actual class sizes, retention and graduation rates, research opportunities, and even honors housing and perks, such as priority registration.

Honors Education: More than Rubrics, Templates, and Outcomes

Editor’s note: The following essay is by Dr. Joan Digby, a professor at Long Island University and Director of the Honors program.  Although we look at basic “outcomes” in trying to evaluate public honors colleges and programs, we agree with Dr. Digby’s criticism of the growing regimentation of higher ed in America and the current over-emphasis on business and bureaucratic terminology.  Our abandonment of numerical rankings reflects our own concern that there are limits to quantifying the real value of higher learning.  This essay is from the website of the National Collegiate Honors Council….

When my goddaughter was eight years old, she was permitted to come from London to New York for a two-week visit. Elanor was precocious and had been asking when she could make this trip from the time she was four. When eight arrived, she was packed and ready. I had never had children, so living with an eight-year-old was an intense experience. What she mainly wanted to do was solve Rubik’s Cube in five minutes flat. When that didn’t happen, she erupted into a volcano of screams and tears. Eventually she figured out how to solve the puzzle and brought her completion time down to about three minutes.

If Ernő Rubik were naming his puzzle, today he would probably go for the pun and call it Rubric’s Cube since rubrics are all people talk about now in education. Remember when the word “paradigm” appeared in every high-toned article? Well, it has been replaced by “rubric.” Here a rubric, there a rubric, everywhere a rubric rubric . . . Old MacDonald had several, and they all add up to little boxes far less colorful and ingenious than Rubik’s Cube.

I’m betting that most of the people who use the word “rubric” know very little about its meaning or history. Rubric means red ochre—red earth—as in Bryce Canyon and Sedona. Red headers were used in medieval manuscripts as section or chapter markers, and you can bet that the Whore of Babylon got herself some fancy rubrics over the years. Through most of its history, the word has been attached to religious texts and liturgy; rubrics were used as direction indicators for conducting divine services. In a system that separates church and state, it’s a wonder that the word has achieved so universal a secular makeover. Now it’s just a fancy word for a scoring grid. Think boxes! Wouldn’t they look sweet colored in red?

For decades I have been involved in university honors education. The essence of the honors approach is, dare I say, teaching “outside the box.” Everyone knows that you can’t put round ideas into square boxes, everyone except the people who do “outcomes assessment,” the pervasive vogue in filling in squares with useless information. Here, for example, is the classic definition of rubric as spelled out by the authors of a terrifying little handbook designed to help people who are still awake at three in the morning looking to speed up grading papers:  “At its most basic, a rubric is a scoring tool that lays out the specific expectations for an assignment” (Stevens and Levi 3). There it is, a “tool” to measure “specific expectations,” and those are precisely what we do not want to elicit from students, especially in honors but to my mind across the university.

My goal is not to score or measure students against preconceived expectations but to encourage the unexpected, the breakthrough response that is utterly new, different, and thus exciting—such as a recent student analysis of Melville’s “Bartleby the Scrivener” in light of the “Occupy Wall Street” movement, an approach that made me rethink the story altogether. The operative word here is “think.” Students attend college, in part, to learn how to think, and we help them engage deeply in “critical thinking.” Wouldn’t it then be hypocritical to take their thoughtful reflections and score them like mindless robots, circling or checking little boxes? Sure it would. That is why, whenever I hear anyone suggest using a “rubric” to grade an essay, I want to let out the bloodcurdling (appropriately red image) scream of an eight-year-old. I’m practicing. I can do it.

What I can’t and won’t do is fill in the little boxes. My field is literature—that is, thought and sensibility expressed in words. My field encourages the subjective, anecdotal, oddly shaped experiences that constitute creative writing. I can tell you a thousand stories about my students, how and what they learn and what will be the outcome of their education. I know their outcome (the plural is ugly) because I write to them for years after they leave school. Many are now my colleagues on campus and my friends all over the world. I can tell you their stories, but I can’t and won’t fill in boxes pretending that these will turn into measurable data. If my colleagues want to do the boxes, I won’t object, but “I’d prefer not to.”

Nor will I read portfolios and brood on what can be gathered about the student writers. English teachers read papers for a living. We assess them, write useful comments, and then return them graded to the students so that they can revise. Doing this is in our blood. For what reason would we dive into a pile of papers on which we are prohibited from writing comments for the sake of producing statistics that don’t even go back to the authors? All writers need suggestions and corrections. If we are not reading papers with the express purpose of providing the students with constructive help, then the act of reading is a waste of time.

I regret to acknowledge that the language and fake measuring tools of the data crunchers have infected even my own department, which now has been coerced into producing lists of goals and objectives with such chalk-grating phrases as “students will use writing as a meaning-making tool” and “generate an interpretation of literature . . . .” Not only the mechanistic language of the document but the fascistic insistence that students “will do” this or that strikes me as an utterly dystopian vision of a university education.

At the very least, English departments everywhere should be the ones to point out that goals and objectives are synonyms and that what the assessment folks really mean are goals and strategies for achieving them.  But “goals and objectives” has become a cant phrase at the core of the outcomes ritual, and I’m afraid there is nothing much we can do to change that.

Whoever came up with the phrase “outcomes assessment” probably has no idea how a liberal education works. We teach, students learn, and, if we are lucky, students reciprocally teach us something in a symbiotic relationship that does not require external administration. It works like this: students attend classes, read, write, engage in labs and other learning activities, pass their courses, even do well, and in time graduate. Faculty enjoy teaching and feel rewarded by the successes of their students. Bingo. That’s it. Nothing more to say or prove. No boxes to fill in. Anyone with an urge to produce data can take attendance at Commencement.

Other horrors have bubbled up to pollute the waters of our Pierian Spring. In addition to rubrics, we now have templates for everything we do. A template is essentially a mold that lets us replicate a structure. In different industries it means a gauge or guide, a horizontal beam functioning to distribute weight, or a wedge used to support a ship’s keel. You can find out more at students’ new best friend, www.dictionary.com.  Yet nowhere in this most accessible word hoard is there a specifically academic meaning for template, a word that must come up at least once in every academic meeting. The template craze implies that everything we do can and must be measured to fit a certain mold.  Not only the word but the increasing use of templates in the university reveal the degree to which academia has become an industrial operation.

In fact, we don’t need templates any more than we need rubrics. They come from the same family of low-level ideas responsible for the mechanical modes of teaching that I reject. If I were a medievalist, I would write an allegorical morality play, an updated version of The Castle of Perseverance, in which virtuous Professors battle vicious Rubrics and Templates, winning the day by driving them off with Open Books—

I concede, maybe Digital Books!

University education, what’s left of it, is at a decisive crossroad that requires us to take a stand against the models that administrations and consultants and accrediting agencies are forcing on us. The liberal arts and sciences are under serious attack, and, if we don’t defend the virtues of imagination and spontaneity in our classes, we will all be teaching from rigid syllabi according to rubrics and templates spelled out week by week as teachers of fifth-grade classes are forced to do.

It so happens that my grandmother, born in 1887, was a fifth-grade teacher. Every Sunday evening she sat at the kitchen table filling out hour-by-hour syllabi for the week to come. I remember a book with little cards, like the library cards we used to tuck into book pockets. No pun intended, but her last name was Tuck. Even then my grandmother resented the mechanical nature of her obligation, calling it with utter contempt “busy work.”

Part of what convinced me to go into college teaching was the desire to avoid busy work and to teach what I was trained to do without people peering over my shoulder or making me fill out needless forms. Throughout my career I have given students general reading lists, telling them that we will get through as many of the works as our discussions allow, eliminate some and add others if our interests take us in different directions. I always say, “There are no literature police to come and check on whether we have read exactly what is printed on this paper.”

But now the literature police have arrived. More and more there is pressure to write a syllabus and stick to it so as to meet absurdly regimented, generally fictitious, and misnamed goals and objectives. This is no way to run a university course and is instead the surest way to drive inspiration out of university teaching and learning.

Tragically, the university is rapidly becoming fifth grade. The terminology that has seeped into university teaching from the lower grades has, to my great horror, also mated with business so that the demons we are now facing believe that we will do as we are told by top-down management so that we attract students, bring in tuition dollars, increase endowments, and pass Go with our regional accreditation bodies. If this sounds like a board game, it is—or perhaps a computer game since everything seems to be played out in distance learning, distance teaching, anything but face-to-face, open-ended, free-form discussion and debate. This pernicious trend has made me one Angry Bird!

Around the campus I see that my young colleagues are running scared. They are afraid that they won’t get tenure and that tenure itself will soon disappear. They are afraid that their small department will be absorbed by another, bigger one. They are afraid that their classes will be cancelled and they will ultimately lose their jobs. We are not in familiar territory because all of the power and control have been misappropriated by business operatives calling for outcomes. We need to remind them that a university—and especially an honors program—is in essence a faculty teaching students. Administrators are hired hands secondary to this endeavor. Moreover, only one outcome is important: students graduate and go into the world to become the next generation of educated people. We need to clear all the rubrics and templates out of the way so that we can teach and they can learn.

To my mind there is nothing but folly in searching for “measurable outcomes”; this is a quest as doomed as searching for the meaning of life. Those who remember Monty Python will get the idea and imagine the Knights Templates dressed up in rubric baldrics, entertaining us with a jolly good “Outcomes Assessment Joust.”

Reference

Stevens, Dannelle D., and Antonia J. Levi. Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback and Promote Student Learning. Sterling, VA: Stylus Publishing, 2013.

The author may be contacted at Joan.Digby@liu.edu.

Ohio U Honors Tutorial Grad Hits the Big Time in Sports Journalism at Age 25

You may know the name Allie LaForce, especially if you’re a sports fan.  Currently a co-host of “Lead Off,” the nightly talk show on the CBS Sports Network, LaForce, only 25, was just a few years ago a point guard on the Ohio University basketball team.

But her basketball days at Ohio U came after she was Miss Teen USA…after she was the valedictorian of her high school class in Vermilion, Ohio… after she was a model in New York… and after she was a guest star on a soap opera.

And another big thing came along after all those accomplishments: LaForce studied broadcast journalism as a member of the highly selective Honors Tutorial College at Ohio University, an elite group of students with an average SAT score of 1380.  Just another pretty face? Hardly.

“LaForce graduated in 2011 from the Honors Tutorial College with a Bachelor of Science in Journalism,” according to Ohio U.  “She then became a reporter and anchor for Fox 8 in Cleveland….LaForce has also worked as a sideline reporter for the NCAA Tournament and has been the halftime host and sideline reporter for the Sun Bowl.”

As a sideline reporter during a Colts-Patriots NFL game, “LaForce was reported to be ‘killing it in her pieces every time CBS shot down to the sidelines for a report’ by Fansided, an independent online sports network.”

To learn more about Allie LaForce, check out an interview filmed with her last year on campus at https://www.youtube.com/watch?v=Bwq3U-gpfFw#t=139.

U.S. Has 24 of Top 50 Engineering and Tech Universities in the World

According to the Times Higher Education World University Rankings, the U.S. is home to 24 of the top 50 engineering and technology universities in the world.

It is also notable that 13 of the 24 U.S. institutions are public universities.  The United States also has the top four schools on the Times list.

“The 2012-2013 Times Higher Education World University Rankings’ Engineering and Technology table judges world class universities across all of their core missions – teaching, research, knowledge transfer and international outlook. The ranking of the world’s top 50 universities for engineering and technology employs 13 carefully calibrated performance indicators to provide the most comprehensive and balanced comparisons available, which are trusted by students, academics, university leaders, industry and governments.”

Here are the U.S. universities on the list, along with their rank:

1. Caltech

2. Princeton

3. MIT

4. UC Berkeley

5. Stanford

7. UCLA

9. Georgia Tech

13. UT Austin

15. Carnegie Mellon

16. Northwestern

17. UC Santa Barbara

18. Cornell

19. Michigan

20. Illinois

21. Columbia

26. Penn

30. Rice

34. Washington

36. UC San Diego

41. Wisconsin

42. Purdue

45. Minnesota

48. UC Davis

49. Duke

 

Top Texas High Schools End Class Ranking to Avoid ‘Ten Percent Rule’

Editor’s note: This post is based on an article in the Austin American-Statesman, by reporter Benjamin Wermund.

To get around some of the effects of the “top ten percent” rule in Texas, which requires public colleges to admit applicants who graduate in the top tenth of their high school classes, some leading Texas high schools no longer rank most students as part of a strategy to improve the chances of their highly-prepared graduates to gain entrance to UT Austin and Texas A&M.

How does this work?  If students with very high GPAs and test scores don’t make the top ten percent at intensely competitive high schools, not being ranked at all appears to trigger a more individualized assessment that yields higher admission rates.

The Eanes district, including students from the affluent Austin suburb of Westlake, was the first to try the strategy.  The district did not rank 90 percent of its graduates, and saw its acceptance rate at UT Austin improve by 39 percent and the rate at Texas A&M improve by 49 percent.

To graduate in the top ten percent at Westlake High School, a student must have straight A’s along with multiple Advanced Placement classes.   Graduating in the top ten percent at a low-performing high school could be achieved with, say, a 3.5 GPA, no AP classes, and low test scores.

(UT Austin this year admitted students from among the top 7 percent of high school grads, not the top 10 percent, under rules that allow the university to adjust the percentage based on projected space in the class.)

So far, at least three other competitive districts in the Austin area have decided to stop ranking all of their students.

Other districts are waiting to see the full effects of the strategy.  Admissions officials claim that they can “ballpark” the percentage but that this process takes longer and could actually result in delays for applicants.

Because the estimation process may eventually reduce the current advantage that some high-performing students receive from not being ranked, the Leander district northwest of Austin leaves the choice of ranking in the hands of students and parents.

The Austin ISD, on the other hand, leaves the choice to individual campuses.  The very competitive Liberal Arts and Sciences Academy and the Ann Richards School for Young Women Leaders are the only campuses that have stopped ranking most students.

 

UC Irvine Remains Top U.S. University under 50 Years Old; UT Dallas Makes Big Gains

The Times Higher Education: Top Universities in the World Under 50 Years Old publication was released today, and the same U.S. institutions that made the list in 2012 also appear in the latest edition, with UC Irvine at the top.

Even though the same eight U.S. universities are among the top 100 “young” institutions worldwide, their places relative to similar universities have changed: UC Irvine slipped only one place, from number 4 to number 5, while UT Dallas leaped from number 29 to number 15.

It is important to bear in mind that all of the world university rankings emphasize research far more than typical domestic rankings, such as those by U.S. News, which reflect research quality only as it influences the academic reputation component of the magazine’s methodology.

The methodology used by Thomson Reuters and the Times weights these categories:

  • Research: volume, income and reputation (30 per cent)
  • Citations: research influence (30 per cent)
  • Teaching: the learning environment (30 per cent)
  • International outlook: people and research (7.5 per cent)
  • Industry income: innovation (2.5 per cent).
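Those five weights sum to 100 per cent, so a university’s overall score under this methodology is effectively a weighted average of its category scores. Here is a minimal sketch of that arithmetic in Python; the category scores below are hypothetical, and the real THE pipeline involves normalization steps not shown here:

```python
# THE category weights as quoted above (fractions of the overall score).
WEIGHTS = {
    "research": 0.30,
    "citations": 0.30,
    "teaching": 0.30,
    "international_outlook": 0.075,
    "industry_income": 0.025,
}

def composite_score(scores: dict) -> float:
    """Weighted sum of category scores; the weights total 1.0."""
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

# Hypothetical category scores (0-100) for a single university:
example = {
    "research": 80.0,
    "citations": 90.0,
    "teaching": 75.0,
    "international_outlook": 60.0,
    "industry_income": 50.0,
}
print(round(composite_score(example), 2))  # 79.25
```

As the weights make plain, a school strong in the three 30 per cent categories can largely offset weakness in international outlook and industry income, which together count for only 10 per cent.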

Top U.S. Universities Under 50 Years Old (2013):

UC Irvine–2013 rank 5; 2012 rank 4

UC Santa Cruz–2013 rank 11, 2012 rank 7

UT Dallas–2013 rank 15; 2012 rank 29

Illinois Chicago–2013 rank 19; 2012 rank 11

George Mason–2013 rank 59; 2012 rank 57

UMD Baltimore Co–2013 rank 60; 2012 rank 63

UT San Antonio–2013 rank 70; 2012 rank 53

Florida International–2013 rank 84; 2012 rank 84

RIT President on Deficiencies of College Rankings

Bill Destler, president of Rochester Institute of Technology, recently posted on the Huffington Post College page that college rankings are universally deficient because of their focus on “inputs” such as SAT scores and high school GPAs.  He might well have added financial resources to the list.

Although we have published a de facto ranking of public university honors programs that isn’t based on any of these criteria, we agree that all rankings, including our own, have deficiencies. 

We also agree with Destler that the annual Forbes rankings are the most deficient of all because, even though they claim to rely on output measures, they also draw “data” from Rate My Professors and PayScale.com.  The first is subjective, and the second reduces the value of a college education to dollars and cents.

The Forbes rankings also have a strong bias against public universities, part of which comes from a desire on the part of the people behind the rankings to “reform” public universities so that they become places for cheap, assembly-line education rather than research institutions with outstanding academic programs.

Destler also points out that rankings based on return on investment will only affirm what most people already know:  universities with large numbers of STEM grads, especially in engineering, will necessarily fare better in the rankings because engineering as a field often provides excellent starting salaries for new graduates.

Destler is also correct in claiming that all rankings distort the value of universities to the extent that their methodologies apply uniform input measures to essentially dissimilar institutions.  Grad rates for a private university that accepts only 6 percent of its applicants will clearly be higher than those for a college with a 65 percent acceptance rate, simply because of the extreme selectivity of the former.

In the case of our rankings, we would say that the overreach is somewhat less severe, since all the honors programs we follow have much lower acceptance rates than most colleges and because our dominant category is honors curriculum, which can be extensive and demanding regardless of admissions requirements.

In addition, we have two basic rankings, Overall Excellence and Honors Factors Only.  The former does generally favor honors programs in universities that have more uniform excellence across the student body, because there is a metric for Rhodes, Marshall, Truman, and Goldwater awards won by all students, not just those won by honors students.

But Honors Factors excludes the metric for prestigious awards and is based strictly on honors-specific elements such as curriculum, grad rates, honors housing, and study-abroad programs.

In the end, our rankings are only suggestive, not definitive.  The same is true for all rankings.  They are best used to suggest possible routes on the journey rather than pinpointing the final destination.

Goldwater Scholarships 2013: Public University Leaders

The most prestigious undergraduate scholarship is awarded annually by the Barry M. Goldwater Foundation to outstanding students majoring in mathematics, science, engineering, or computer science, and this year ten of the major universities we follow on this site won three awards each.

Public universities with three Goldwater scholars for 2013 are Colorado, Maryland, Michigan, Minnesota, North Carolina State, Oregon State, Pitt, UT Austin, Washington State, and Wisconsin.  The leader among all public universities is Montana State University, with the maximum number of scholarships allowed: four.  North Carolina State is the overall leader over the past two years, with a total of seven Goldwater scholars.

The Goldwater awards are important indicators of the value of undergraduate research and the attention young scholars receive.  Out of 271 of the $7,500 scholarships awarded in 2013, a total of 159 went to science majors; 71 to engineering majors; 27 to math majors; and 14 to computer science majors.

Goldwater scholars are also highly represented among winners of graduate awards such as Rhodes, Marshall, and Churchill scholarships.  In recent years, 80 Goldwater winners have also won Rhodes scholarships; 118 have earned Marshall scholarships; and a remarkable 110 Goldwater scholars have gone on to win Churchill scholarships.

Below is a list of the universities we follow that have at least four Goldwater awards in the last two years.

North Carolina State: 2013 (3); 2012 (4)

Kansas: 2013 (2); 2012 (4)

Minnesota: 2013 (3); 2012 (3)

Nebraska: 2013 (2); 2012 (4)

Pitt: 2013 (3); 2012 (3)

UT Austin: 2013 (3); 2012 (3)

Colorado: 2013 (3); 2012 (2)

Oregon State: 2013 (3); 2012 (2)

Michigan: 2013 (3); 2012 (2)

Wisconsin: 2013 (3); 2012 (2)

Alabama: 2013 (2); 2012 (3)

Washington State: 2013 (2); 2012 (3)

Georgia: 2013 (1); 2012 (4)

Maryland: 2013 (3); 2012 (2)

South Carolina: 2013 (2); 2012 (3)

Clemson: 2013 (2); 2012 (2)

Illinois: 2013 (2); 2012 (2)

UMass Amherst: 2013 (1); 2012 (3)

Michigan State: 2013 (2); 2012 (2)

Ohio State: 2013 (1); 2012 (3)

Rutgers: 2013 (2); 2012 (2)

UC San Diego: 2013 (2); 2012 (2)


World Ranking of Subject Areas, U.S. Public Universities

Another thing we like about the annual Times Higher Education World University Rankings is that they list the universities with the top rankings in several subject areas, including engineering and technology; arts and humanities (literature, history, philosophy); life sciences (biology, chemistry); physical sciences (physics, geology); and social sciences (sociology, psychology, political science, economics).

Below are the 2013 world rankings of leading U.S. public universities, by subject area:

Engineering and Technology:

UC Berkeley (4)

UCLA (7)

Georgia Tech (9)

UT Austin (13)

UC Santa Barbara (17)

Michigan (19)

Illinois (20)

Washington (34)

Purdue (45)

UC Davis (48)

Arts and Humanities:

UC Berkeley (7)

UCLA (16)

Michigan (18)

Rutgers (20)

UT Austin (22)

Wisconsin (27)

North Carolina (33)

UC San Diego (35)

UMass Amherst (42)

Pitt (45)

Virginia (45)

Arizona (49)

Life Sciences:

UC Berkeley (6)

UCLA (15)

UC San Diego (17)

Michigan (18)

Washington (24)

UC Davis (25)

Wisconsin (30)

North Carolina (35)

UMass Amherst (38)

UC Santa Barbara (39)

Penn State (43)

Illinois (47)

Physical Sciences:

UC Berkeley (2)

UCLA (9)

Washington (14)

UC Santa Barbara (15)

UT Austin (18)

Michigan (20)

Illinois (24)

Colorado (30)

Wisconsin (38)

Georgia Tech (47)

UC Santa Cruz (48)

Social Sciences:

Michigan (12)

UCLA (12)

UC Berkeley (14)

Wisconsin (21)

North Carolina (25)

Washington (27)

UT Austin (28)

Minnesota (29)

UC San Diego (31)

Ohio State (34)

Illinois (34)

Penn State (37)

Michigan State (47)

UC Santa Barbara (49)